863 results for Gaussian Plume model for multiple sources for Cochin
Abstract:
This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The resulting compactness of the representation makes it especially suitable for decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
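As a rough illustration of the offline/online split described in this abstract, the following sketch (not the authors' implementation) fits a GMM with EM on an Isomap embedding of placeholder feature vectors using scikit-learn; the feature matrix, dimensionalities and component counts are assumptions made for the example.

    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    features = rng.random((500, 12))   # stand-in for color/texture descriptors

    # Offline stage: non-linear dimensionality reduction, then EM fitting of a GMM
    # that acts as a compact likelihood representation.
    iso = Isomap(n_neighbors=10, n_components=3)
    embedding = iso.fit_transform(features)
    likelihood_model = GaussianMixture(n_components=5, covariance_type='full').fit(embedding)

    # Online stage: map a newly observed feature vector onto the manifold and
    # evaluate its log-likelihood under the learned mixture.
    new_obs = rng.random((1, 12))
    print(likelihood_model.score_samples(iso.transform(new_obs)))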
Abstract:
The aim of this paper is to demonstrate the validity of using Gaussian mixture models (GMM) for representing probabilistic distributions in a decentralised data fusion (DDF) framework. GMMs are a powerful and compact stochastic representation allowing efficient communication of feature properties in large-scale decentralised sensor networks. It will be shown that GMMs provide a basis for analytical solutions to the update and prediction operations for general Bayesian filtering. Furthermore, a variant of the Covariance Intersection algorithm for Gaussian mixtures will be presented, ensuring a conservative update for the fusion of correlated information between two nodes in the network. In addition, purely visual sensory data will be used to show that decentralised data fusion and tracking of non-Gaussian states observed by multiple autonomous vehicles is feasible.
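For context, the sketch below shows plain single-Gaussian covariance intersection, the conservative-fusion rule that the paper's Gaussian-mixture variant generalises; the example estimates, and the choice of minimising the trace of the fused covariance, are assumptions made for illustration only.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def covariance_intersection(xa, Pa, xb, Pb):
        # Fuse two Gaussian estimates with unknown cross-correlation.
        Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

        def fused_trace(omega):
            return np.trace(np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv))

        # Pick the weight that minimises the fused covariance trace.
        omega = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method='bounded').x
        P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
        return x, P

    xa, Pa = np.array([1.0, 0.0]), np.diag([2.0, 1.0])
    xb, Pb = np.array([1.2, 0.1]), np.diag([1.0, 3.0])
    print(covariance_intersection(xa, Pa, xb, Pb))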
Abstract:
Distributed pipeline asset systems are crucial to society. The deterioration of these assets and the optimal allocation of a limited maintenance budget pose crucial challenges for water utility managers. Decision makers should be assisted with optimal solutions to select the best maintenance plan given available resources and management strategies. Much research effort has been dedicated to the development of optimal strategies for the maintenance of water pipes. Most of these maintenance strategies are intended for scheduling individual water pipes. Optimal group scheduling of replacement jobs for groups of pipes or other linear assets has so far not received much attention in the literature. It is common practice for replacement planners to manually select two or three pipes, using ambiguous criteria, to group into one replacement job. This is clearly not the best solution for job grouping and may not be cost-effective, especially when the total cost can run to several million dollars. In this paper, an optimal group scheduling scheme with three decision criteria for distributed pipeline asset maintenance decisions is proposed. A Maintenance Grouping Optimization (MGO) model with multiple criteria is developed. An immediate challenge of such modelling is the scalability of the vast combinatorial solution space. To address this issue, a modified genetic algorithm is developed together with a Judgment Matrix, which corresponds to the various combinations of pipe replacement schedules. An industrial case study based on a section of a real water distribution network was conducted to test the new model. The results of the case study show that the new schedule generated a significant cost reduction compared with a schedule that does not group pipes.
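The following minimal genetic-algorithm sketch is not the paper's MGO model or its Judgment Matrix; it only illustrates searching a combinatorial space of pipe-to-job assignments in which grouping nearby pipes saves shared mobilisation cost. All costs, pipe positions and GA settings are invented for the example.

    import random

    random.seed(1)
    N_PIPES, N_GROUPS = 20, 5
    SETUP_COST, SPREAD_COST = 10_000.0, 50.0
    positions = [random.uniform(0, 1000) for _ in range(N_PIPES)]   # pipe locations (m)

    def cost(schedule):
        # Each replacement job pays one mobilisation cost plus a cost growing with
        # the geographic spread of the pipes grouped into that job.
        total = 0.0
        for g in set(schedule):
            members = [positions[i] for i, grp in enumerate(schedule) if grp == g]
            total += SETUP_COST + SPREAD_COST * (max(members) - min(members))
        return total

    def mutate(schedule):
        child = schedule[:]
        child[random.randrange(N_PIPES)] = random.randrange(N_GROUPS)
        return child

    def crossover(a, b):
        cut = random.randrange(1, N_PIPES)
        return a[:cut] + b[cut:]

    # Simple elitist GA over group assignments.
    population = [[random.randrange(N_GROUPS) for _ in range(N_PIPES)] for _ in range(30)]
    for _ in range(300):
        population.sort(key=cost)
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        population = parents + children
    print(round(min(cost(s) for s in population), 2))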
Abstract:
Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful, for experimentation and practical implementation purposes, to develop and test these models efficiently, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ Gaussian-based technique and GMMs.
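A minimal sketch of the VQ-plus-Gaussian idea (not the paper's exact system): cluster training frames with k-means, fit one multi-dimensional Gaussian per cluster, and score test frames against the best-matching cluster Gaussian as a fast approximation to EM-trained GMM scoring. The feature dimensionality and cluster count are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    train_frames = rng.normal(size=(2000, 13))   # stand-in for MFCC feature vectors
    test_frames = rng.normal(size=(300, 13))

    # Vector quantisation of the training frames, then one Gaussian per codeword.
    km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(train_frames)
    gaussians = []
    for k in range(16):
        cluster = train_frames[km.labels_ == k]
        gaussians.append(multivariate_normal(
            mean=cluster.mean(axis=0),
            cov=np.cov(cluster, rowvar=False) + 1e-6 * np.eye(13)))  # light regularisation

    # Utterance score: average over frames of the best per-cluster log-likelihood.
    frame_scores = np.max([g.logpdf(test_frames) for g in gaussians], axis=0)
    print(frame_scores.mean())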
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trivial studies. As genomic analyses have become more sophisticated and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors grouped by their regulatory role and corresponding promoter strength. Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes as the 'core-regulatory-set', and interactions found only in a subset of the genomes explored as the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
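As a small illustration of the spectrum-kernel approach used in the SVM experiments above, the sketch below builds k-mer count vectors, forms the corresponding Gram matrix and trains an SVM on it; the toy sequences, labels and k value are placeholders, not the thesis data.

    from collections import Counter
    from itertools import product
    import numpy as np
    from sklearn.svm import SVC

    K = 3
    KMERS = [''.join(p) for p in product('ACGT', repeat=K)]

    def spectrum(seq):
        # k-mer count vector over the full DNA k-mer alphabet.
        counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
        return np.array([counts[k] for k in KMERS], dtype=float)

    seqs = ['ACGTACGTGCA', 'TTGACAGCTAG', 'ACGTGGGTACA', 'TTGACATTGAC']
    labels = [1, 0, 1, 0]
    X = np.vstack([spectrum(s) for s in seqs])
    gram = X @ X.T                       # spectrum kernel = dot product of count vectors

    clf = SVC(kernel='precomputed').fit(gram, labels)
    test = np.vstack([spectrum('ACGTACGGGCA')])
    print(clf.predict(test @ X.T))       # kernel values between test and training sequences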
Abstract:
Evidence currently supports the view that intentional interpersonal coordination (IIC) is a self-organizing phenomenon facilitated by visual perception of co-actors in a coordinative coupling (Schmidt, Richardson, Arsenault, & Galantucci, 2007). The present study examines how apparent IIC is achieved in situations where visual information is limited for co-actors in a rowing boat. In paired rowing boats only one of the actors [bow seat] gets to see the actions of the other [stroke seat]. Thus IIC appears to be facilitated despite the lack of important visual information for the control of the dyad. Adopting a mimetic approach to expert coordination, the present study qualitatively examined the experiences of expert performers (N=9) and coaches (N=4) with respect to how IIC was achieved in paired rowing boats. Themes were explored using inductive content analysis, which led to a layered model of control. Rowers and coaches reported the use of multiple perceptual sources in order to achieve IIC. As expected (Kelso, 1995; Schmidt & O'Brien, 1997; Turvey, 1990), rowers in the bow of a pair boat make use of visual information provided by the partner in front of them [stroke]. However, this perceptual information is subordinate to perception of the relationship between the boat hull and the water passing beside it. The stroke seat, in the absence of visual information about his/her partner, achieves coordination by picking up information about the lifting or looming of the boat's stern along with water passage past the hull. In this case it appears that apparent or desired IIC is supported by the perception of extra-personal variables, in this case boat behavior, as this perceptual information source is used by both actors. To conclude, co-actors in two-person rowing boats use multiple sources of perceptual information for apparent IIC, and these sources change according to task constraints. Where visual information is restricted, IIC is facilitated via extra-personal perceptual information and apparent IIC switches to intentional extra-personal coordination.
Abstract:
Emergency service workers (e.g., fire-fighters, police and paramedics) are exposed to elevated levels of potentially traumatising events through the course of their work. Such exposure can have lasting negative consequences (e.g., Post Traumatic Stress Disorder; PTSD) and/or positive outcomes (e.g., Posttraumatic Growth; PTG). Research has implicated trauma, occupational and personal variables that account for variance in post-trauma outcomes, yet to date no research has investigated these factors and their relative influence on both PTSD and PTG in a single study. Based on Calhoun and Tedeschi's (2013) model of PTG and previous research, this study tested regression models of PTG and PTSD symptoms among 218 fire-fighters. Results indicated that organisational factors predicted symptoms of PTSD, while there was partial support for the hypothesis that coping and social support would be predictors of PTG. Experiencing multiple sources of trauma, higher levels of organisational and operational stress, and the use of cognitive reappraisal coping were all significant predictors of PTSD symptoms. Increases in PTG were predicted by experiencing trauma from multiple sources and the use of self-care coping. Results highlight the importance of organisational factors in the development of PTSD symptoms, and of individual factors for promoting PTG.
Abstract:
Countless studies have stressed the importance of social identity, particularly its role in various organizational outcomes, yet questions remain as to how identities initially develop, shift and change based on the configuration of multiple, pluralistic relationships grounded in an organizational setting. The interactive model of social identity formation, recently proposed to explain the internalization of shared norms and values – critical in identity formation – has not yet received empirical examination. We analyzed multiple sources of data from nine nuclear professionals over three years to understand the construction of social identity in new entrants entering an organization. Informed by our data analyses, we found support for the interactive model and found that age and level of experience influenced whether new entrants undertook an inductive or deductive route to the internalization of group norms and values. This study represents an important contribution to the study of social identity and the process by which identities are formed, particularly under conditions of duress or significant organizational disruption.
Abstract:
This thesis investigates condition monitoring (CM) of diesel engines using acoustic emission (AE) techniques. The AE signals recorded from a small diesel engine are mixtures of multiple sources from multiple cylinders, and it is therefore difficult to interpret the information conveyed in the signals for CM purposes. This thesis develops a series of practical signal processing techniques to overcome this problem. Various experimental studies were conducted to assess the CM capabilities of AE analysis for diesel engines, and a series of modified signal processing techniques were proposed. These techniques showed promising capability for CM of multi-cylinder diesel engines using multiple AE sensors.
Abstract:
Network coding is a method for achieving channel capacity in networks. The key idea is to allow network routers to linearly mix packets as they traverse the network so that recipients receive linear combinations of packets. Network coded systems are vulnerable to pollution attacks where a single malicious node floods the network with bad packets and prevents the receiver from decoding correctly. Cryptographic defenses to these problems are based on homomorphic signatures and MACs. These proposals, however, cannot handle mixing of packets from multiple sources, which is needed to achieve the full benefits of network coding. In this paper we address integrity of multi-source mixing. We propose a security model for this setting and provide a generic construction.
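For readers unfamiliar with the mixing the paper seeks to protect, the sketch below shows random linear network coding over a small prime field: packets are mixed by random coefficient vectors and decoded by inverting the coefficient matrix. It implements none of the paper's homomorphic signatures or MACs, and the field size and packet shapes are arbitrary choices for the example.

    import numpy as np
    from sympy import Matrix

    P = 257                                      # small prime field (illustrative)
    rng = np.random.default_rng(0)
    packets = rng.integers(0, P, size=(3, 8))    # three source packets of 8 symbols each

    # Routers forward random linear combinations; each coded packet carries its
    # coefficient vector alongside the mixed payload.
    coeffs = rng.integers(0, P, size=(3, 3))
    while int(Matrix(coeffs.tolist()).det()) % P == 0:   # redraw until decodable
        coeffs = rng.integers(0, P, size=(3, 3))
    coded = (coeffs @ packets) % P

    # Receiver: invert the coefficient matrix modulo P to recover the sources.
    coeffs_inv = np.array(Matrix(coeffs.tolist()).inv_mod(P).tolist(), dtype=np.int64)
    recovered = (coeffs_inv @ coded) % P
    print(np.array_equal(recovered, packets))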
Abstract:
In this paper, we identify two types of injustice as antecedents of abusive supervision and ultimately of subordinate psychological distress and insomnia. We examine distributive justice (an individual's evaluation of their input to output ratio compared to relevant others) and interactional injustice (the quality of interpersonal treatment received when procedures are implemented). Using a sample of Filipinos in a variety of occupations, we identify two types of injustice experienced by supervisors as stressors that provoke them to display abusive supervision to their subordinates. We examine two consequences of abusive supervision - subordinate psychological distress and insomnia. In addition, we identify two moderators of these relationships, namely, supervisor distress and subordinate self-esteem. We collected survey data from multiple sources including subordinates, their supervisors, and their partners. Data were obtained from 175 matched supervisor-subordinate dyads over a 6-month period, with subordinates' partners providing ratings of insomnia. Results of structural equation modelling analyses provided support for an indirect effects model in which supervisors' experience of unfair treatment cascades down the organization, resulting in subordinate psychological distress and, ultimately, in their insomnia. In addition, results partially supported the proposed moderated relationships in the cascading model. © 2010 Taylor & Francis.
Abstract:
This study examines the process by which newly recruited nuclear engineering and technical staff came to understand, define, think, feel and behave within a distinct group that makes a direct contribution to the organization's overall emphasis on a culture of reliability and system safety. In the field of organizational behavior, the interactive model of social identity formation has recently been proposed to explain the process by which the internalization of shared norms and values occurs, an element critical to identity formation. Using this rich model of organizational behavior, we analyzed multiple sources of data from nine new hires over a period of three years, from the time they were employed, to investigate the construction of social identity by new entrants entering a complex organizational setting, in this case a nuclear facility. Informed by our data analyses, we found support for the interactive model of social identity development and report the unexpected finding that a newly appointed member's age and level of experience appear to influence the manner in which they adapt to, and assimilate into, their surroundings. This study represents an important contribution to the safety and reliability literature as it provides rich insight into the way newly recruited employees enact the process by which their identities are formed and hence act, particularly under conditions of duress or significant organizational disruption in complex organizational settings.
Abstract:
In recent years, rapid advances in information technology have led to various data collection systems which are enriching the sources of empirical data for use in transport systems. Currently, traffic data are collected through various sensors including loop detectors, probe vehicles, cell phones, Bluetooth, video cameras, remote sensing and public transport smart cards. It has been argued that combining the complementary information from multiple sources will generally result in better accuracy, increased robustness and reduced ambiguity. Despite the fact that there have been substantial advances in data assimilation techniques to reconstruct and predict the traffic state from multiple data sources, such methods are generally data-driven and do not fully utilize the power of traffic models. Furthermore, the existing methods are still limited to freeway networks and are not yet applicable in the urban context due to the added complexity of the flow behavior. The main traffic phenomena on urban links are generally caused by the boundary conditions at intersections, un-signalized or signalized, at which the switching of the traffic lights and the turning maneuvers of road users lead to shock-wave phenomena that propagate upstream of the intersections. This paper develops a new model-based methodology to build a real-time traffic prediction model for arterial corridors using data from multiple sources, particularly loop detectors and partial observations from Bluetooth and GPS devices.
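As background to the shock-wave behaviour described above, here is a generic cell-transmission (LWR-based) sketch of a signalised link in which a red phase builds a queue that propagates upstream; it is a textbook toy model, not the paper's data-fusion methodology, and every parameter is an illustrative assumption.

    import numpy as np

    N_CELLS, DT = 20, 1.0          # cells along the link, time step (s)
    V_FREE, W_BACK = 1.0, 0.5      # free-flow and backward-wave speeds (cells/step)
    K_JAM, Q_MAX = 1.0, 0.4        # jam density (veh/cell) and capacity (veh/step)
    DEMAND = 0.3                   # inflow at the upstream boundary (veh/step)

    density = np.zeros(N_CELLS)
    for t in range(120):
        green = (t % 60) < 30      # 30 s green, 30 s red signal cycle at the stop line
        sending = np.minimum(V_FREE * density, Q_MAX)
        receiving = np.minimum(W_BACK * (K_JAM - density), Q_MAX)
        # Flows across cell boundaries, with the signal closing the downstream end.
        inflow = min(DEMAND, receiving[0])
        flows = np.minimum(sending[:-1], receiving[1:])
        outflow = sending[-1] if green else 0.0
        density += DT * (np.concatenate(([inflow], flows)) - np.concatenate((flows, [outflow])))
    print(np.round(density, 2))    # higher densities near the stop line mark the queue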
Abstract:
The recently discovered twist phase is studied in the context of the full ten-parameter family of partially coherent general anisotropic Gaussian Schell-model beams. It is shown that the nonnegativity requirement on the cross-spectral density of the beam demands that the strength of the twist phase be bounded from above by the inverse of the transverse coherence area of the beam. The twist phase as a two-point function is shown to have the structure of the generalized Huygens kernel or Green's function of a first-order system. The ray-transfer matrix of this system is exhibited. Wolf-type coherent-mode decomposition of the twist phase is carried out. Imposition of the twist phase on an otherwise untwisted beam is shown to result in a linear transformation in the ray phase space of the Wigner distribution. Though this transformation preserves the four-dimensional phase-space volume, it is not symplectic and hence it can, when impressed on a Wigner distribution, push it out of the convex set of all bona fide Wigner distributions unless the original Wigner distribution was sufficiently deep into the interior of the set.
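For reference, in a standard twisted Gaussian Schell-model parametrisation (notation assumed here rather than taken from the paper: σs the beam width, σg the transverse coherence width, k the wavenumber and u the twist strength), the cross-spectral density and the nonnegativity bound described above are commonly written as

    W(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) \propto
      \exp\!\left[-\frac{\rho_1^{2}+\rho_2^{2}}{4\sigma_s^{2}}\right]
      \exp\!\left[-\frac{|\boldsymbol{\rho}_1-\boldsymbol{\rho}_2|^{2}}{2\sigma_g^{2}}\right]
      \exp\!\left[-\,\mathrm{i}\,k\,u\,(x_1 y_2 - x_2 y_1)\right],
      \qquad |u| \le \frac{1}{k\,\sigma_g^{2}} ,

    % i.e. the twist strength is bounded by the inverse of the transverse
    % coherence area (up to the wavenumber factor).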
Abstract:
The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash-generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies when trying either to explain or to predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash-generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple-crash-process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk-generating processes, including engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the EB-NB model, and the detection of blackspots was also improved. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
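To make the latent-class idea concrete, the sketch below runs EM for a two-component Poisson mixture on simulated site crash counts; it is a stripped-down frequentist stand-in, not the paper's Bayesian Latent Class model with covariates and informative priors.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    # Simulated counts from two latent crash-generating processes.
    counts = np.concatenate([rng.poisson(1.0, 400), rng.poisson(6.0, 100)])

    pi, lam = 0.5, np.array([0.5, 5.0])       # initial mixing weight and class rates
    for _ in range(100):
        # E-step: responsibility of the "high-risk" class for each site.
        p_low = (1 - pi) * poisson.pmf(counts, lam[0])
        p_high = pi * poisson.pmf(counts, lam[1])
        resp = p_high / (p_low + p_high)
        # M-step: update the mixing weight and the class-specific rates.
        pi = resp.mean()
        lam = np.array([np.average(counts, weights=1 - resp),
                        np.average(counts, weights=resp)])
    print(round(pi, 3), np.round(lam, 2))     # recovered mixing weight and rates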