141 results for typological classification of languages


Relevance:

100.00%

Publisher:

Abstract:

In the UK, Singapore, Canada, New Zealand and Australia, as in many other jurisdictions, charity law is rooted in the common law and anchored in the Statute of Charitable Uses 1601. The Pemsel classification of charitable purposes was uniformly accepted and, together with a shared and growing pool of judicial precedents aided by the ‘spirit and intendment’ rule, has subsequently allowed the law to develop along much the same lines. In recent years, all of the above jurisdictions have embarked on law reform processes designed to strengthen regulatory processes and to define and encode common law concepts in statute. The reform outcomes are now to be found in a batch of national charity statutes which reflect interesting differences in the extent to which their respective governments have been prepared to balance the modernising of charitable purposes and other common law concepts against the customary concern to tighten the regulatory framework.

Relevance:

100.00%

Publisher:

Abstract:

Background: Internationally, research on child maltreatment-related injuries has been hampered by a lack of routinely collected health data with which to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research examining the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods: A retrospective medical record review and recoding methodology was used to assess the reliability of maltreatment coding. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was drawn from within those hospitals. Results: In 3.6% of cases the coders disagreed on whether any maltreatment code (definite or possible) could be assigned versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and a specificity of 0.948. Review of the discrepant cases revealed that all had some indication of risk documented in the records. Of the cases originally assigned a definite or possible maltreatment code, 15.5% were recoded to a more or less definite stratum. In terms of the number and type of maltreatment codes assigned, the auditor assigned more maltreatment types based on the medical documentation than the original coder (22% of auditor-coded cases had more than one maltreatment type assigned, compared with only 6% of the originally coded data). The maltreatment types most ‘under-coded’ by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion: Given the increasing international attention to improving the uniformity of reporting of child maltreatment-related injuries, and the emphasis on better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.
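The sensitivity and specificity figures reported above follow directly from the 2×2 coder-agreement table. A minimal sketch, using illustrative cell counts chosen only to reproduce figures close to the reported values (the study's actual counts are not given in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from 2x2 agreement counts.

    tp/fn: cases the auditor coded as maltreatment, which the original
           coder did/did not also code as maltreatment.
    tn/fp: cases the auditor coded as unintentional injury, which the
           original coder did/did not also code as unintentional.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only, chosen to land near the reported 0.982 / 0.948.
sens, spec = sensitivity_specificity(tp=550, fn=10, tn=365, fp=20)
print(round(sens, 3), round(spec, 3))
```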

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime, and data classification can be performed.
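The Expectation Maximisation stage described above can be illustrated in isolation. The sketch below fits a two-component, one-dimensional Gaussian mixture by EM on synthetic data; it stands in only for the post-Isomap learning step, and the Isomap embedding, the graphical model and the hyperspectral data themselves are omitted:

```python
import math, random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture with expectation-maximisation."""
    mu = [min(xs), max(xs)]        # crude initialisation at the data extremes
    var = [1.0, 1.0]
    weight = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [weight[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return weight, mu, var

# Synthetic two-cluster data standing in for a low-dimensional embedding
rng = random.Random(1)
data = ([rng.gauss(0.0, 0.5) for _ in range(200)]
        + [rng.gauss(5.0, 0.5) for _ in range(200)])
w, mu, var = em_gmm_1d(data)
```

The fitted means recover the two cluster centres; classification then amounts to picking the component with the highest responsibility for each new point.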

Relevance:

100.00%

Publisher:

Abstract:

Modern society has come to expect electrical energy on demand, while many facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages, with serious consequences for continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model in detail the operation of substation and sub-transmission equipment using network flow evaluation, and to consider multiple levels of component failure. In this thesis a new model for aging equipment is developed, combining the standard treatment of random failures with a specific model for aging failures. This technique is applied to examine the impact of aging equipment on the reliability of bulk supply loads and distribution network consumers over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints. The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contain valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach is developed that supports early decisions in planning replacement activities for non-repairable aging components, in order to maintain an economically acceptable level of system reliability. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
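The combination of random and aging failure modes described above is commonly modelled with a constant-hazard (exponential) term for random failures and an increasing-hazard (Weibull) term for aging failures. A hedged sketch of that combination, with illustrative parameters rather than values from the thesis:

```python
import math

def weibull_failure_prob(age, beta, eta):
    """P(aging failure by `age` years) under a Weibull model:
    F(t) = 1 - exp(-(t/eta)^beta); beta > 1 gives an increasing hazard."""
    return 1.0 - math.exp(-(age / eta) ** beta)

def combined_failure_prob(age, lam, beta, eta):
    """Probability that at least one failure mode has occurred by `age`,
    combining random (exponential, rate lam per year) and aging (Weibull)
    failures, assuming the two modes are independent."""
    p_random = 1.0 - math.exp(-lam * age)
    p_aging = weibull_failure_prob(age, beta, eta)
    return 1.0 - (1.0 - p_random) * (1.0 - p_aging)

# Illustrative parameters: 1% random failure rate per year, characteristic
# life 40 years, shape 3 (wear-out dominated late in life)
probs = [combined_failure_prob(y, lam=0.01, beta=3.0, eta=40.0)
         for y in (10, 20, 30, 40)]
```

As the planning horizon extends, the aging term dominates, which is why a constant-rate model alone understates the risk for equipment nearing end of life.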

Relevance:

100.00%

Publisher:

Abstract:

Genomic and proteomic analyses have attracted a great deal of interest in biological research in recent years. Many methods have been applied to discover useful information contained in the enormous databases of genomic and amino acid sequences, and the results of these investigations in turn inspire further research in biological fields. These biological sequences, which may be considered multiscale sequences, have specific features whose characterisation requires more refined methods. This project studies some of these biological challenges with multiscale analysis methods and a stochastic modelling approach. The first part of the thesis aims to cluster some unknown proteins, and to classify their families as well as their structural classes. A development in proteomic analysis is concerned with the determination of protein functions, and the first step in this development is to classify proteins and predict their families. This motivates us to study some unknown proteins from specific families, and to cluster them into families and structural classes. We select a large number of proteins from the same families or superfamilies, and link them to simulate unknown large proteins from these families. We use multifractal analysis and the wavelet method to capture the characteristics of these linked proteins. The simulation results show that the method is valid for the classification of large proteins. The second part of the thesis explores the relationships of proteins based on a layered comparison of their components. Many methods are based on protein homology, because resemblance at the sequence level normally indicates similarity of functions and structures. However, some proteins may have similar functions despite low sequence identity. We consider protein sequences at a detailed level to investigate the problem of protein comparison. The comparison is based on empirical mode decomposition (EMD), and protein sequences are represented by their intrinsic mode functions. A measure of similarity is introduced with a new cross-correlation formula. The similarity results show that the EMD is useful for detecting functional relationships between proteins. The third part of the thesis investigates the transcriptional regulatory network of the yeast cell cycle via stochastic differential equations. As the investigation of genome-wide gene expression has become a focus of genomic analysis, researchers have tried for many years to understand the mechanisms of the yeast genome, yet how cells control gene expression still needs further investigation. We use a stochastic differential equation to model the expression profile of a target gene, and modify the model with a Gaussian membership function. For each target gene, a transcriptional rate is obtained, and the estimated transcriptional rate is also calculated using information from five possible transcriptional regulators. Some regulators of these target genes are verified against the related references. With these results, we construct a transcriptional regulatory network for genes from the yeast Saccharomyces cerevisiae. This construction is useful for uncovering further mechanisms of the yeast cell cycle.
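The thesis's cross-correlation formula is not reproduced in the abstract, but the general idea of comparing proteins as numeric signals can be sketched with a standard zero-lag normalised cross-correlation over hydropathy-encoded sequences. The Kyte-Doolittle scale and the toy sequences below are illustrative choices, and the EMD step is omitted:

```python
import math

# Kyte-Doolittle hydropathy values, a common numeric encoding of amino acids
HYDROPATHY = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def encode(seq):
    """Map an amino acid string to a numeric signal."""
    return [HYDROPATHY[aa] for aa in seq]

def similarity(x, y):
    """Zero-lag normalised cross-correlation (Pearson) of two signals,
    truncated to their common length."""
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

# Toy sequences: two identical, one unrelated
s1 = encode("MKVLAAGIVLLLS")
s2 = encode("MKVLAAGIVLLLS")
s3 = encode("DDEERRKKNNQQH")
```

Identical signals score 1.0, while unrelated ones score near zero, which is the behaviour any sequence-as-signal similarity measure needs before finer distinctions can be drawn.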

Relevance:

100.00%

Publisher:

Abstract:

This presentation discusses some of the general issues relating to the classification of UAS for the purposes of defining and promulgating safety regulations. One possible approach to defining a classification scheme for UAS Type Certification Categories is reviewed.

Relevance:

100.00%

Publisher:

Abstract:

Inspection of solder joints is a critical process in the electronics manufacturing industry for reducing manufacturing cost, improving yield, and ensuring product quality and reliability. This paper proposes two inspection modules for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five levels of solder quality, with respect to the amount of solder paste, have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. The proposed system does not need any special illumination, and the images are acquired by an ordinary digital camera. This system could contribute to the development of automated non-contact, non-destructive and low-cost solder joint quality inspection systems.
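The Log Gabor filter named above is usually defined in the frequency domain, where, unlike the ordinary Gabor filter, it has no DC component. A minimal one-dimensional sketch of the transfer function (the centre frequency and bandwidth ratio below are illustrative, not the paper's settings):

```python
import math

def log_gabor(freqs, f0, sigma_ratio=0.55):
    """1-D log-Gabor transfer function:
    G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma_ratio)^2)),
    with G(0) = 0, so the filter passes no DC component."""
    out = []
    for f in freqs:
        if f <= 0.0:
            out.append(0.0)  # zero response at DC by construction
        else:
            out.append(math.exp(-(math.log(f / f0) ** 2)
                                / (2 * math.log(sigma_ratio) ** 2)))
    return out

# Normalised frequency axis for a 64-sample signal; peak placed at f0 = 0.25
freqs = [i / 64 for i in range(64)]
g = log_gabor(freqs, f0=0.25)
```

In a 2-D inspection pipeline the same radial profile is combined with an angular component and multiplied against the image's Fourier transform; the log-frequency Gaussian shape keeps the filter bank symmetric on a log axis.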

Relevance:

100.00%

Publisher:

Abstract:

Background: The vast sequence divergence among different virus groups presents a great challenge to alignment-based analysis of virus phylogeny. Because of the problems caused by uncertainty in alignment, existing tools for phylogenetic analysis based on multiple alignment cannot be directly applied to whole-genome comparison and phylogenomic studies of viruses. There has therefore been growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among the alignment-free methods, a dynamical language (DL) method proposed by our group has been successfully applied to the phylogenetic analysis of bacteria and chloroplast genomes. Results: In this paper, the DL method is used to analyse the whole-proteome phylogeny of 124 large dsDNA viruses and 30 parvoviruses, two data sets with a large difference in genome size. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses and parvoviruses by the International Committee on Taxonomy of Viruses (ICTV). Conclusions: The present method provides a new way of recovering the phylogeny of large dsDNA viruses and parvoviruses, as well as some insights into the affiliation of a number of unclassified viruses. In comparison, some alignment-free methods such as the CV Tree method can be used to recover the phylogeny of large dsDNA viruses, but they are not suitable for resolving the phylogeny of parvoviruses, whose genomes are much smaller.
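The DL method itself is not specified in the abstract. As a stand-in, the sketch below illustrates the general family of alignment-free comparisons it belongs to, using a simple k-mer frequency profile and cosine distance (the sequences are toy examples, and this is not the DL method's actual formula):

```python
import math
from collections import Counter

def kmer_profile(seq, k=3):
    """Frequency profile of overlapping k-mers (words of length k)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_distance(p, q):
    """1 - cosine similarity between two k-mer count vectors.
    Counter returns 0 for absent k-mers, so no alignment is needed."""
    dot = sum(p[w] * q[w] for w in set(p) | set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / (norm_p * norm_q)

a = kmer_profile("ATGGCGTACGTTAGCATGGCGTAC")
b = kmer_profile("ATGGCGTACGTAAGCATGGCGTAC")  # one substitution vs a
c = kmer_profile("CCCCCCCCCCCCCCCCCCCCCCCC")  # unrelated composition
```

Pairwise distances of this kind feed directly into standard tree-building methods (e.g. neighbour joining), which is how alignment-free phylogenies are typically assembled.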

Relevance:

100.00%

Publisher:

Abstract:

Smut fungi are important pathogens of grasses, including the cultivated crops maize, sorghum and sugarcane. Typically, smut fungi infect the inflorescence of their host plants. Three genera of smut fungi (Ustilago, Sporisorium and Macalpinomyces) form a complex with overlapping morphological characters, making species placement problematic. For example, the newly described Macalpinomyces mackinlayi possesses a combination of morphological characters such that it cannot be unambiguously accommodated in any of the three genera. Previous attempts to define Ustilago, Sporisorium and Macalpinomyces using morphology and molecular phylogenetics have highlighted the polyphyletic nature of the genera, but have failed to produce a satisfactory taxonomic resolution. A detailed systematic study of 137 smut species in the Ustilago-Sporisorium-Macalpinomyces complex was completed in the current work. Morphological and DNA sequence data from five loci were assessed with maximum likelihood and Bayesian inference to reconstruct a phylogeny of the complex. The phylogenetic hypotheses generated were used to identify morphological synapomorphies, some of which had previously been dismissed as useful characters for delimiting taxa within the complex. These synapomorphic characters are the basis for a revised taxonomic classification of the Ustilago-Sporisorium-Macalpinomyces complex, which takes into account their morphological diversity and coevolution with their grass hosts. The new classification is based on a redescription of the type genus Sporisorium, and the establishment of four genera, described from newly recognised monophyletic groups, to accommodate species expelled from Sporisorium. Over 150 taxonomic combinations have been proposed as an outcome of this investigation, which makes a rigorous and objective contribution to the fungal systematics of these important plant pathogens.

Relevance:

100.00%

Publisher:

Abstract:

We read the excellent review of telemonitoring in chronic heart failure (CHF) [1] with interest and commend the authors on the proposed classification of telemedical remote management systems according to the type of data transfer, decision ability and level of integration. However, several points require clarification in relation to our Cochrane review of telemonitoring and structured telephone support [2]. We included a study by Kielblock [3]. We corresponded directly with this study team specifically to find out whether or not this was a randomised study, and were informed that it was a randomised trial, albeit by date of birth. We note in our review [2] that this randomisation method carries a high risk of bias. Post-hoc meta-analyses without these data demonstrate no substantial change to the effect estimates for all-cause mortality (original risk ratio (RR) 0.66 [95% CI 0.54, 0.81], p<0.0001; revised RR 0.72 [95% CI 0.57, 0.92], p=0.008), all-cause hospitalisation (original RR 0.91 [95% CI 0.84, 0.99], p=0.02; revised RR 0.92 [95% CI 0.84, 1.02], p=0.10) or CHF-related hospitalisation (original RR 0.79 [95% CI 0.67, 0.94], p=0.008; revised RR 0.75 [95% CI 0.60, 0.94], p=0.01). Secondly, we would classify the Tele-HF study [4, 5] as structured telephone support rather than telemonitoring. Again, inclusion of these data alters the point estimate but not the overall result of the meta-analyses [4]. Finally, our review [2] does not include invasive telemonitoring, as the search strategy was not designed to capture such studies. Therefore, direct comparison of our review findings with recent studies of these interventions is not recommended.
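Risk ratios and confidence intervals of the kind quoted above can be computed from 2×2 counts with the standard log-transform method. A sketch with hypothetical counts (not data from the review):

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio and 95% CI via the log-transform method:
    SE(ln RR) = sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 60/1000 deaths in the intervention arm
# vs 90/1000 in the control arm (illustrative only)
rr, lo, hi = risk_ratio_ci(60, 1000, 90, 1000)
```

Because the interval is built on the log scale and exponentiated back, it is asymmetric around the point estimate, which matches the form of the intervals reported in the letter.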

Relevance:

100.00%

Publisher:

Abstract:

Staphylococci are important pathogenic bacteria responsible for a range of diseases in humans, and are the microorganisms most frequently isolated in a hospital microbiology laboratory. The general classification of staphylococci divides them into two major groups: coagulase-positive staphylococci (e.g. Staphylococcus aureus) and coagulase-negative staphylococci (e.g. Staphylococcus epidermidis). Coagulase-negative staphylococcal (CoNS) isolates include a variety of species and many different strains, but are often dominated by the most important organism of this group, S. epidermidis. These organisms are now regarded as important pathogens causing infections related to prosthetic materials and surgical wounds. A significant number of S. epidermidis isolates are also resistant to different antimicrobial agents. Virulence factors in CoNS are not clearly established or well documented. S. epidermidis is evolving into a resistant and powerful microbe associated with nosocomial infections because it has several properties which, independently and in combination, make it a successful infectious agent, especially in the hospital environment. Such characteristics include biofilm formation, drug resistance and the evolution of genetic variants. The purpose of this project was to develop a novel SNP genotyping method to genotype S. epidermidis strains originating from hospital patients and healthy individuals. High-resolution melt analysis was used to assign binary typing profiles to both clinical and commensal strains using a new bioinformatics approach. The presence of antibiotic resistance genes and biofilm-coding genes was also interrogated in these isolates.

Relevance:

100.00%

Publisher:

Abstract:

Purpose: Web search engines are frequently used to locate information on the Internet, but not all queries have an informational goal: instead of information, some people may be looking for specific web sites or may wish to conduct transactions with web services. This paper focuses on automatically classifying the different user intents behind web queries. Design/methodology/approach: For the research reported in this paper, 130,000 web search engine queries were categorised as informational, navigational or transactional using a k-means clustering approach based on a variety of query traits. Findings: The research findings show that more than 75 percent of web queries are informational in nature, with about 12 percent each navigational and transactional. The queries fall into eight clusters: six primarily informational, and one each primarily transactional and primarily navigational. Research limitations/implications: This study contributes to the web search literature by providing information about the goals of searchers and a method for automatically classifying the intent of user queries. Automatic classification of user intent can lead to improved web search engines that tailor results to specific user needs. Practical implications: The paper discusses how web search engines can use automatically classified user queries to provide more targeted and relevant results by implementing the real-time classification method presented in this research. Originality/value: This research investigates a new application of a method for automatically classifying the intent of user queries. There has been limited research to date on automatically classifying the user intent of web queries, even though the pay-off for web search engines can be quite beneficial. © Emerald Group Publishing Limited.
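The k-means clustering approach named above can be sketched as follows. The two-feature query representation and the data points are hypothetical, and a deterministic farthest-point initialisation is used here for reproducibility (the paper's actual features and initialisation are not given in the abstract):

```python
def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def init_centers(points, k):
    """Deterministic farthest-point initialisation: start from the first
    point, then repeatedly pick the point farthest from all centres."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    centers = init_centers(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k),
                         key=lambda i: dist2(p, centers[i]))].append(p)
        # Update step: centres move to their cluster means
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical query traits, e.g. (query length, navigational-token score)
points = [(1, 1), (2, 1), (1, 2), (2, 2),
          (9, 8), (10, 8), (9, 9), (10, 9)]
centers, clusters = kmeans(points, k=2)
```

With query traits as features, each resulting cluster is then labelled informational, navigational or transactional by inspecting its dominant members, as the paper describes for its eight clusters.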

Relevance:

100.00%

Publisher:

Abstract:

The International Classification of Diseases (ICD) is used to categorise diseases, injuries and external causes, and is a key epidemiological tool enabling the storage and retrieval of data from health and vital records to produce core international mortality and morbidity statistics. The ICD is updated periodically to ensure the classification remains current, and work is now underway to develop the next revision, ICD-11. Almost 20 years have passed since the last ICD edition was published, and over 60 years since the last substantial structural revision of the external causes chapter. Revision of such a critical tool requires transparency and documentation to ensure that changes made to the classification system are recorded comprehensively for future reference. In this paper, the authors provide a history of external causes classification development and outline the external cause structure. Approaches to managing ICD-10 deficiencies are discussed, and the ICD-11 revision approach regarding the development of, rationale for and implications of proposed changes to the chapter is outlined. Through improved capture of external cause concepts in ICD-11, a stronger evidence base will be available to inform injury prevention, treatment, rehabilitation and policy initiatives, ultimately contributing to a reduction in injury morbidity and mortality.

Relevance:

100.00%

Publisher:

Abstract:

Inspection of solder joints is a critical process in the electronics manufacturing industry for reducing manufacturing cost, improving yield, and ensuring product quality and reliability. This paper proposes the use of the Log-Gabor filter bank, the Discrete Wavelet Transform and the Discrete Cosine Transform for feature extraction from solder joint images on Printed Circuit Boards (PCBs). A distance based on the Mahalanobis Cosine metric is also presented for the classification of five different types of solder joints. The experimental results show that this methodology achieves high accuracy and well-generalised performance, making it an effective way to reduce cost and improve quality in the production of PCBs.
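The Mahalanobis Cosine metric mentioned above is commonly understood as a cosine distance measured in a whitened feature space. A diagonal-covariance simplification (per-feature standardisation rather than full covariance whitening) is sketched below with hypothetical feature vectors; the paper's exact formulation may differ:

```python
import math

def mahalanobis_cosine_distance(u, v, std):
    """Cosine distance in a whitened feature space: each feature is divided
    by its standard deviation before the angle between the vectors is
    measured. This is a diagonal-covariance simplification of the
    Mahalanobis cosine metric."""
    uw = [a / s for a, s in zip(u, std)]
    vw = [b / s for b, s in zip(v, std)]
    dot = sum(a * b for a, b in zip(uw, vw))
    norm = (math.sqrt(sum(a * a for a in uw))
            * math.sqrt(sum(b * b for b in vw)))
    return 1.0 - dot / norm

# Hypothetical 3-feature vectors from solder-joint images (e.g. Log-Gabor,
# DWT and DCT energies); the features have different scales, hence whitening
std = [4.0, 0.5, 10.0]
a = [8.0, 1.0, 20.0]
b = [16.0, 2.0, 40.0]  # same direction as a, different magnitude
```

Because only the angle matters after whitening, a scaled copy of a feature vector is at distance zero, which makes the metric robust to overall brightness or energy differences between images.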

Relevance:

100.00%

Publisher:

Abstract:

Language use has proven to be the most complex and complicating of all Internet features, yet people and institutions invest enormously in language and cross-language features because they are fundamental to the success of the Internet’s past, present and future. The thesis focuses on the development of the latter, features that facilitate and signify linking between or across languages, in both their historical and current contexts. In the theoretical analysis, the conceptual platform of inter-language linking is developed, both to accommodate efforts towards a new social complexity model for the co-evolution of languages and language content, and to create an open analytical space for language and cross-language related features of the Internet and beyond. The practised uses of inter-language linking have changed over the last decades. Before and during the first years of the WWW, mechanisms of inter-language linking were at best important elements used to create new institutional or content arrangements, but on a large scale they were insignificant. This changed with the emergence of the WWW and its development into a web in which content in different languages co-evolves. The thesis traces the inter-language linking mechanisms that facilitated these dynamic changes by analysing what these mechanisms are, how their historical and current contexts can be understood, and what kinds of cultural-economic innovation they enable and impede. The study discusses this alongside four empirical cases of bilingual or multilingual media use, ranging from television and web services for the languages of smaller populations to large-scale web ventures involving multiple languages by the British Broadcasting Corporation, the Special Broadcasting Service Australia, Wikipedia and Google. To sum up, the thesis introduces the concepts of ‘inter-language linking’ and the ‘lateral web’ to model the social complexity and co-evolution of languages online. The resulting model reconsiders existing social complexity models in that it is the first that can explain the emergence of large-scale, networked co-evolution of languages and language content facilitated by the Internet and the WWW. Finally, the thesis argues that the Internet enables an open space for language and cross-language related features, and investigates how far this process is facilitated by (1) amateurs and (2) human-algorithmic interaction cultures.