135 results for Heteroskedasticity-based identification
Abstract:
The first uses of computing technologies and the development of land use models to support decision-making in urban planning date back to the mid-20th century. The main thrust of computing applications in urban planning is their contribution to sound decision-making and planning practices. Over the last couple of decades, many new computing tools and technologies, including geospatial technologies, have been designed to enhance planners' capability to deal with complex urban environments and to plan for prosperous and healthy communities. This chapter therefore examines the role of information technologies, particularly internet-based geographic information systems, as decision support systems to aid public participatory planning. The chapter discusses challenges and opportunities in the use of internet-based mapping applications and tools for collaborative decision-making, and introduces a prototype internet-based geographic information system developed to integrate public-oriented interactive decision mechanisms into urban planning practice. This system, referred to as the 'Community-based Internet GIS' model, incorporates advanced information technologies, distance learning, sustainable urban development principles and community involvement techniques into decision-making processes, and was piloted in Shibuya, Tokyo, Japan.
Abstract:
Nuclear Factor Y (NF-Y) is a trimeric complex that binds to the CCAAT box, a ubiquitous eukaryotic promoter element. The three subunits NF-YA, NF-YB and NF-YC are represented by single genes in yeast and mammals. However, in model plant species (Arabidopsis and rice) multiple genes encode each subunit providing the impetus for the investigation of the NF-Y transcription factor family in wheat. A total of 37 NF-Y and Dr1 genes (10 NF-YA, 11 NF-YB, 14 NF-YC and 2 Dr1) in Triticum aestivum were identified in the global DNA databases by computational analysis in this study. Each of the wheat NF-Y subunit families could be further divided into 4-5 clades based on their conserved core region sequences. Several conserved motifs outside of the NF-Y core regions were also identified by comparison of NF-Y members from wheat, rice and Arabidopsis. Quantitative RT-PCR analysis revealed that some of the wheat NF-Y genes were expressed ubiquitously, while others were expressed in an organ-specific manner. In particular, each TaNF-Y subunit family had members that were expressed predominantly in the endosperm. The expression of nine NF-Y and two Dr1 genes in wheat leaves appeared to be responsive to drought stress. Three of these genes were up-regulated under drought conditions, indicating that these members of the NF-Y and Dr1 families are potentially involved in plant drought adaptation. The combined expression and phylogenetic analyses revealed that members within the same phylogenetic clade generally shared a similar expression profile. Organ-specific expression and differential response to drought indicate a plant-specific biological role for various members of this transcription factor family.
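The drought-response expression findings above rest on quantitative RT-PCR. Relative expression changes of this kind are conventionally computed with the 2^-ΔΔCt (Livak) method; the sketch below shows that standard calculation, with invented Ct values for illustration (the paper's actual normalisation scheme is not specified here):

```python
def fold_change(ct_target_stress, ct_ref_stress, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-delta-delta-Ct (Livak) method.

    Ct is the qRT-PCR threshold cycle; a lower Ct means more transcript.
    """
    ddct = (ct_target_stress - ct_ref_stress) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target gene amplifies two cycles earlier
# (relative to the reference gene) under drought -> 4-fold up-regulation.
print(fold_change(25.0, 20.0, 27.0, 20.0))  # → 4.0
```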
Abstract:
Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. For experimentation and practical implementation, particularly when computational resources are limited, it is useful to develop and test these models efficiently. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ-Gaussian-based technique and GMMs.
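The core idea of the abstract, replacing EM-trained mixtures with a VQ partition plus one Gaussian per cell, can be sketched in numpy as below. This is a minimal illustration of the general technique, not the authors' implementation; the simple k-means routine, diagonal covariances and variance floor are all assumptions:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: the VQ step that partitions the data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def vq_gaussian_model(X, k):
    """One diagonal-covariance Gaussian per VQ cell, weighted by cell size."""
    _, labels = kmeans(X, k)
    comps = []
    for j in range(k):
        Xj = X[labels == j]
        if len(Xj) == 0:  # drop empty VQ cells
            continue
        comps.append((len(Xj) / len(X), Xj.mean(0), Xj.var(0) + 1e-6))
    return comps

def avg_log_likelihood(model, X):
    """Score data under the mixture via log-sum-exp over components."""
    logs = []
    for w, mu, var in model:
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        logs.append(np.log(w) + ll)
    return np.logaddexp.reduce(np.array(logs), axis=0).mean()
```

In-distribution data should score clearly higher than mismatched data, which is all a verification decision needs.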
An approach to statistical lip modelling for speaker identification via chromatic feature extraction
Abstract:
This paper presents a novel technique for tracking moving lips for the purpose of speaker identification. In our system, a model of the lip contour is formed directly from chromatic information in the lip region; iterative refinement of contour point estimates is not required. Colour features are extracted from the lips via concatenated profiles taken around the lip contour. Reduction of order in the lip features is obtained via principal component analysis (PCA) followed by linear discriminant analysis (LDA). Statistical speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments performed on the M2VTS database show encouraging results.
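The PCA-then-LDA order-reduction step described above can be sketched with numpy alone. This is a generic illustration of the two projections, not the paper's implementation; the synthetic feature dimensions and component counts below are assumptions:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project features onto the top principal components (via SVD)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T

def lda_reduce(X, labels, n_components):
    """Fisher LDA: maximise between-class over within-class scatter."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall_mean, mc - overall_mean)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return X @ eigvecs[:, order[:n_components]].real
```

PCA first decorrelates and compresses the high-dimensional colour profiles; LDA then picks the directions that best separate speakers.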
Abstract:
This paper investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work with multi-modal MSHMMs has been performed successfully for the task of speech recognition. The use of temporal lip information for speaker identification has been explored previously (T.J. Wark et al., 1998); however, that work was restricted to output fusion via single-stream HMMs. We present an extension to this previous work, and show that an MSHMM is a valid structure for multi-modal speaker identification.
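The full MSHMM machinery is beyond an abstract, but the stream-combination rule at its heart, an exponent-weighted product of per-stream likelihoods (equivalently, a weighted sum of log-likelihoods), is easy to sketch. The stream weight of 0.7 and the score values below are illustrative assumptions, not values from the paper:

```python
def fuse_stream_scores(audio_ll, visual_ll, audio_weight=0.7):
    """Stream-weighted log-likelihood fusion, as used in multi-stream HMMs."""
    return audio_weight * audio_ll + (1.0 - audio_weight) * visual_ll

def identify(speaker_scores, audio_weight=0.7):
    """Pick the speaker whose fused audio+visual score is highest.

    speaker_scores maps speaker id -> (audio log-lik, visual log-lik).
    """
    fused = {s: fuse_stream_scores(a, v, audio_weight)
             for s, (a, v) in speaker_scores.items()}
    return max(fused, key=fused.get)
```

Lowering the audio weight in noisy acoustic conditions is what lets the visual stream keep identification robust.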
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage with frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and artificial neural networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, principal component analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using a finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
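The FRF-to-PCA-to-index pipeline can be illustrated in miniature: learn a principal subspace from intact-state FRFs, then score a measured FRF by its reconstruction residual in that subspace, a larger residual suggesting departure from healthy behaviour. This is a generic sketch of a PCA novelty index, not the paper's exact damage index; the simulated FRFs and component count are assumptions, and in the paper the resulting indices would then be fed to an ANN:

```python
import numpy as np

def fit_intact_subspace(frfs_intact, n_components):
    """SVD basis spanning the intact structure's FRF variation."""
    mean = frfs_intact.mean(axis=0)
    _, _, Vt = np.linalg.svd(frfs_intact - mean, full_matrices=False)
    return mean, Vt[:n_components]

def damage_index(frf, mean, basis):
    """Relative residual after reconstructing in the intact subspace."""
    coeffs = (frf - mean) @ basis.T
    residual = frf - (mean + coeffs @ basis)
    return np.linalg.norm(residual) / np.linalg.norm(frf)
```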
Abstract:
Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood. There is an increasing need for the control of waterborne microbial pathogens. Methods that allow the determination of the origin of faecal contamination in water are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including detection of different microorganisms and various host-specific markers. However, to date there has been no universal MST method that can reliably determine the source (human or animal) of faecal contamination; the use of multiple approaches is therefore frequently advised. MST is currently recognised as a research tool rather than something to be included in routine practice. The main focus of this research was to develop novel and universally applicable methods to meet the demand for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the target organism for our studies as, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeats (CRISPR) screening using high resolution melt analysis (HRMA), and phage detection development based on CRISPR types. The advantage of combining SNP genotyping and CRISPR genes is discussed in this study. For the first time, a highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South-East Queensland, Australia.
Four human profiles [11], [29], [32] and [45] and the animal-specific SNP profile [7] were detected in water. Two human-specific profiles, [29] and [11], were found to be prevalent in the samples over a period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method is able to identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host-specificity in these mixed profiles, a CRISPR screening protocol was developed and applied to the set of E. coli previously analysed for SNP profiles. CRISPR loci, which record previous DNA coliphage attacks, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli as well as in other pathogens in water. Although host-specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful in identifying the geographical site of sources. HRMA allows determination of 'different' and 'same' CRISPR alleles and can be introduced into water monitoring as a cost-effective and rapid method. Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful as marker genotypes globally for the identification of human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers for the search for unknown phages was developed to examine host-specificity in phage sequences.
Preliminary experiments on the recombinant plasmids showed the possibility of using this method for recovering phage sequences. Future studies will determine the host-specificity of DNA phage genotyping as soon as the first reliable sequences can be acquired. Undoubtedly, only the application of multiple approaches in MST will allow identification of the character of microbial contamination with higher confidence and reliability.
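Once an isolate's SNP profile number is known, assigning a putative source using the host-specific profiles reported above reduces to a lookup. A toy sketch using the profile numbers from this study (the function name and the "unknown" fall-through are illustrative; real use must handle mixed profiles and laboratory QC):

```python
# Host-specific SNP profiles reported in the study above.
HUMAN_PROFILES = {11, 29, 32, 45}
ANIMAL_PROFILES = {7}

def classify_isolate(profile_id):
    """Assign a putative faecal source to an E. coli SNP profile number."""
    if profile_id in HUMAN_PROFILES:
        return "human"
    if profile_id in ANIMAL_PROFILES:
        return "animal"
    return "unknown"
```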
Abstract:
Recent studies on automatic new topic identification in Web search engine user sessions demonstrated that neural networks are successful in automatic new topic identification. However, most of this work applied new topic identification algorithms to data logs from a single search engine. In this study, we investigate whether the application of neural networks for automatic new topic identification is more successful on some search engines than on others. Sample data logs from the Norwegian search engine FAST (currently owned by Overture) and from Excite are used in this study. The findings of this study suggest that query logs with more topic shifts tend to provide more successful results on shift-based performance measures, whereas logs with more topic continuations tend to provide better results on continuation-based performance measures.
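The study's classifiers are neural networks trained on session-log features; as a stand-in, the underlying notion of a "topic shift" versus a "topic continuation" between consecutive queries can be illustrated with a simple term-overlap heuristic. The threshold and tokenisation below are invented for illustration and are not the study's method:

```python
def term_overlap(query_a, query_b):
    """Jaccard overlap between the term sets of two consecutive queries."""
    terms_a = set(query_a.lower().split())
    terms_b = set(query_b.lower().split())
    return len(terms_a & terms_b) / max(len(terms_a | terms_b), 1)

def classify_transition(query_a, query_b, threshold=0.2):
    """Label a query pair as topic continuation or topic shift."""
    return "continuation" if term_overlap(query_a, query_b) >= threshold else "shift"
```

A neural approach, as in the paper, would replace the fixed threshold with a learned decision over richer features (time intervals, result-page requests, query reformulation patterns).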
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage with noise-polluted data, which is an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate a method that performs better with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures that is able to accept noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the web site of the IASC-ASCE (International Association for Structural Control – American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs; calculation of damage index values using the proposed algorithm; development of the artificial neural networks; introduction of the damage indices as input parameters; and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study at different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are of common occurrence when dealing with industrial structures.
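Benchmark studies of this kind typically superimpose synthetic noise on simulated FRFs at a prescribed percentage of the signal level. One common convention, scaling zero-mean Gaussian noise to a percentage of the signal's RMS, can be sketched as below; this is an assumption for illustration, not necessarily what the datagen function does:

```python
import numpy as np

def add_measurement_noise(frf, noise_percent, rng):
    """Add zero-mean Gaussian noise scaled to a percentage of signal RMS."""
    rms = np.sqrt(np.mean(frf ** 2))
    return frf + rng.normal(0.0, noise_percent / 100.0 * rms, frf.shape)
```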
Abstract:
Successful identification and exploitation of opportunities has been an area of interest to many entrepreneurship researchers. Since Shane and Venkataraman’s seminal work (e.g. Shane and Venkataraman, 2000; Shane, 2000), several scholars have theorised on how firms identify, nurture and develop opportunities. The majority of this literature has been devoted to understanding how entrepreneurs search for new applications of their technological base or discover opportunities based on prior knowledge (Zahra, 2008; Sarasvathy et al., 2003). In particular, knowledge about potential customer needs and problems that may present opportunities is vital (Webb et al., 2010). Whereas the role of prior knowledge of customer problems (Shane, 2003; Shepherd and DeTienne, 2005) and positioning oneself in a so-called knowledge corridor (Fiet, 1996) has been researched, the role of opportunity characteristics and their interaction with customer-related mechanisms that facilitate and hinder opportunity identification has received scant attention.
Abstract:
Road dust contains potentially toxic pollutants originating from a range of anthropogenic sources common to urban land uses, together with soil inputs from surrounding areas. The research study analysed the mineralogy and morphology of dust samples from road surfaces in different land uses, and of background soil samples, to characterise the relative source contributions to road dust. The road dust consists primarily of soil-derived minerals (60%), with quartz averaging 40-50% and the remainder being the clay-forming minerals albite, microcline, chlorite and muscovite originating from surrounding soils. About 2% was organic matter, primarily originating from plant matter. Potentially toxic pollutants represented about 30% of the build-up. These pollutants consist of brake and tyre wear, combustion emissions and fly ash from asphalt. Heavy metals such as Zn, Cu, Pb, Ni, Cr and Cd primarily originate from vehicular traffic, while Fe, Al and Mn primarily originate from surrounding soils. The research study confirmed the significant contribution of vehicular traffic to dust deposited on urban road surfaces.
Abstract:
Significant numbers of children are severely abused and neglected by parents and caregivers. Infants and very young children are the most vulnerable and are unable to seek help. To identify these situations and enable child protection and the provision of appropriate assistance, many jurisdictions have enacted 'mandatory reporting laws' requiring designated professionals such as doctors, nurses, police and teachers to report suspected cases of severe child abuse and neglect. Other jurisdictions have not adopted this legislative approach, at least partly motivated by a concern that the laws produce dramatic increases in unwarranted reports, which, it is argued, lead to investigations that infringe on people's privacy, cause trauma to innocent parents and families, and divert scarce government resources from deserving cases. The primary purpose of this paper is to explore the extent to which opposition to mandatory reporting laws is valid based on the claim that the laws produce 'overreporting'. The first part of this paper revisits the original mandatory reporting laws, discusses their development into their various current forms, explains their relationship with policy and common law reporting obligations, and situates them within modern child protection systems. This part of the paper shows that, in general, contemporary reporting laws have expanded far beyond their original conceptualisation, but that there is also now a deeper understanding of the nature, incidence, timing and effects of different types of severe maltreatment, an awareness that the real incidence of maltreatment is far higher than that officially recorded, and strong evidence showing that the majority of identified cases of severe maltreatment are the result of reports by mandated reporters.
The second part of this paper discusses the apparent effect of mandatory reporting laws on 'overreporting' by referring to Australian government data about reporting patterns and outcomes, with a particular focus on New South Wales. It will be seen that raw descriptive data about report numbers and outcomes appear to show that reporting laws produce both desirable consequences (identification of severe cases) and problematic consequences (increased numbers of unsubstantiated reports). Yet, to explore the extent to which the data support the overreporting claim, and because numbers of unsubstantiated reports alone cannot demonstrate overreporting, this part of the paper asks further questions of the data. Who makes reports, about which maltreatment types, and what are the outcomes of those reports? What is the nature of these reports; for example, to what extent are multiple reports made about the same child? What meaning can be attached to an 'unsubstantiated' report, and can such reports be used to show flaws in reporting effectiveness and problems in reporting laws? It will be suggested that the available evidence from Australia is not sufficiently detailed or strong to demonstrate the overreporting claim. However, it is also apparent that, whether adopting an approach based on public health and/or other principles, much better evidence about reporting needs to be collected and analyzed. As well, more nuanced research needs to be conducted to identify what can reasonably be said to constitute 'overreports', and efforts must be made to minimize unsatisfactory reporting practice, informed by the relevant jurisdiction's context and aims. It is also concluded that, depending on the jurisdiction, the available data may provide useful indicators of positive, negative and unanticipated effects of specific components of the laws, and of the strengths, weaknesses and needs of the child protection system.
Abstract:
This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly take into account various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997-2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard EB approach in correctly identifying hotspots.
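For intuition, ranking sites by a posterior quantity rather than raw counts can be sketched with the simplest conjugate model: a Poisson likelihood with a Gamma prior on each site's crash rate, whose posterior mean shrinks noisy raw rates toward the prior. This is far simpler than the paper's FB hierarchical models (no spatiotemporal effects, and the prior is fixed rather than estimated); the α and β values are illustrative:

```python
import numpy as np

def posterior_mean_rate(crashes, exposure, alpha, beta):
    """Poisson-Gamma posterior mean of a site's crash rate.

    With prior Gamma(alpha, beta), observing `crashes` events over
    `exposure` (e.g. site-years) gives posterior Gamma(alpha + crashes,
    beta + exposure), whose mean is returned.
    """
    return (alpha + crashes) / (beta + exposure)

def rank_sites(crashes, exposure, alpha=2.0, beta=1.0):
    """Rank site indices from most to least hazardous by posterior mean."""
    scores = posterior_mean_rate(np.asarray(crashes, float),
                                 np.asarray(exposure, float), alpha, beta)
    return list(np.argsort(-scores))
```

Note how a site with few observations is pulled toward the prior mean α/β instead of being ranked on its raw (and noisy) crash rate.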
Abstract:
The study presents a multi-layer genetic algorithm (GA) approach using correlation-based methods to facilitate damage determination for through-truss bridge structures. To begin, the structure's damage-suspicious elements are divided into several groups. In the first GA layer, the damage is initially optimised for all groups using a correlation objective function. In the second layer, the groups are combined into larger groups and the optimisation restarts from the normalised result of the first layer. The identification process then repeats until the final layer is reached, where a single group includes all structural elements and only minor optimisation is required to fine-tune the final result. Several damage scenarios on a complicated through-truss bridge example are used to demonstrate the proposed approach's effectiveness. Structural modal strain energy is employed as the variable vector in the correlation function for damage determination. Simulations and comparison with traditional single-layer optimisation show that the proposed approach is efficient and feasible for complicated truss bridge structures when measurement noise is taken into account.
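Two ingredients of the multi-layer scheme are easy to sketch: a correlation objective that scores a candidate damage pattern against the measured modal-strain-energy change vector, and the merging of element groups between layers. The pairwise merging and the squared-correlation form below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def correlation_objective(predicted, measured):
    """Squared correlation between predicted and measured modal strain
    energy change vectors; 1.0 indicates a perfect pattern match."""
    p = predicted - predicted.mean()
    m = measured - measured.mean()
    return float((p @ m) ** 2 / ((p @ p) * (m @ m)))

def merge_groups(groups):
    """Combine adjacent element groups pairwise for the next GA layer."""
    return [sum(groups[i:i + 2], []) for i in range(0, len(groups), 2)]
```

A GA layer would maximise `correlation_objective` over damage parameters within each group, then `merge_groups` coarsens the grouping until one group spans the whole structure.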