891 results for "synchronous HMM"
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective: to minimise their distance apart under given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed evolutionary computation approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan. The resulting path plan exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments and results show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, when implemented on a hardware accelerator such as a field programmable gate array chip, can be used in the UAV as an on-board replanner, as well as in ground station systems for assisting in high-precision planning and modelling of mission scenarios.
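The two-population idea in this abstract can be sketched as follows. This is a toy 2-D illustration, not the paper's algorithm: the population sizes, turn-rate encoding, mutation-only variation, and all constants below are assumed, and the real SGA additionally enforces kinematic constraints and heading continuity where the two half-paths meet.

```python
import math
import random

random.seed(42)

STEP = 1.0    # fixed step length per control vector (assumed)
N_CTRL = 8    # control vectors per half-path
POP = 30      # individuals per population
GENS = 200

def rollout(start, heading, controls):
    """Integrate a sequence of turn-rate controls into a 2-D endpoint."""
    x, y, h = start[0], start[1], heading
    for turn in controls:
        h += turn
        x += STEP * math.cos(h)
        y += STEP * math.sin(h)
    return x, y

def endpoint_gap(fwd, bwd, start, goal):
    """Fitness: distance between the tips of the forward and backward half-paths."""
    fx, fy = rollout(start, 0.0, fwd)
    bx, by = rollout(goal, math.pi, bwd)
    return math.hypot(fx - bx, fy - by)

def evolve(start, goal):
    """Co-evolve two populations toward a common meeting point."""
    rand = lambda: [random.uniform(-0.4, 0.4) for _ in range(N_CTRL)]
    pop_f = [rand() for _ in range(POP)]
    pop_b = [rand() for _ in range(POP)]
    for _ in range(GENS):
        # each population is ranked against the other's current best
        pop_f.sort(key=lambda c: endpoint_gap(c, pop_b[0], start, goal))
        pop_b.sort(key=lambda c: endpoint_gap(pop_f[0], c, start, goal))
        # elitist refill: mutate the best half into the worst half
        for pop in (pop_f, pop_b):
            for i in range(POP // 2, POP):
                parent = pop[i - POP // 2]
                pop[i] = [g + random.gauss(0, 0.05) for g in parent]
    return pop_f[0], pop_b[0], endpoint_gap(pop_f[0], pop_b[0], start, goal)

fwd, bwd, gap = evolve((0.0, 0.0), (10.0, 4.0))
print(round(gap, 3))
```

Because each population keeps its elites, the gap between the two best half-paths is non-increasing over generations, which mirrors the "common objective to minimise their distance apart" described above.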
Abstract:
In the decision-making of multi-area ATC (Available Transfer Capacity) in an electricity market environment, the existing resources of the transmission network should be optimally dispatched and employed in a coordinated manner, on the premise that secure system operation is maintained and the associated risk is controllable. Non-sequential Monte Carlo simulation is used to determine the ATC probability density distribution of specified areas under the influence of several uncertainty factors; based on this, a coordinated probabilistic optimal decision-making model with maximal risk benefit as its objective is developed for multi-area ATC. The NSGA-II is applied to calculate the ATC of each area, considering the risk cost caused by relevant uncertainty factors and the synchronous coordination among areas. The essential characteristics of the developed model and the employed algorithm are illustrated on the IEEE 118-bus test system. Simulation results show that the risk of multi-area ATC decision-making is influenced by the uncertainties in power system operation and the relative importance degrees of different areas.
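The non-sequential sampling step mentioned above can be illustrated with a minimal sketch: each component's availability state is drawn independently per trial (no chronological simulation), and the empirical distribution of the resulting transfer margin approximates an ATC probability density. The two-line interface, outage rates, and load figures below are invented for illustration, not taken from the paper.

```python
import random
import statistics

random.seed(7)

# Hypothetical two-line interface: (capacity in MW, forced-outage rate)
LINES = [(300.0, 0.05), (200.0, 0.10)]
LOAD_MEAN, LOAD_SD = 80.0, 10.0   # assumed uncertain internal demand (MW)

def sample_atc():
    """One non-sequential draw: every component state is sampled independently."""
    available = sum(cap for cap, fo in LINES if random.random() > fo)
    load = random.gauss(LOAD_MEAN, LOAD_SD)
    return max(available - load, 0.0)

# Empirical ATC distribution from independent snapshots of the system state
samples = [sample_atc() for _ in range(20000)]
print(round(statistics.mean(samples), 1))
```

A decision-making layer such as the paper's NSGA-II optimisation would then operate on this empirical distribution (e.g. on its mean and tail risk) rather than on a single deterministic ATC value.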
Abstract:
Social media tools are often the result of innovations in Information Technology, developed by IT professionals and innovators. Nevertheless, IT professionals, many of whom are responsible for designing and building social media technologies, have not themselves been studied with respect to how they use or experience social media for professional purposes. This study will use Information Grounds Theory (Pettigrew, 1998) as a framework to study IT professionals' experience in using social media for professional purposes. Information grounds facilitate the opportunistic discovery of information within social settings created temporarily at a place where people gather for a specific purpose (e.g., doctors' waiting rooms, office tea rooms), but where the social atmosphere stimulates spontaneous sharing of information (Pettigrew, 1999). This study proposes that social media has the qualities of a rich information ground; people participate from separate "places" in cyberspace synchronously and in real time, making it almost as dynamic and unplanned as physical information grounds. There is limited research on how social media platforms are perceived as a "place" (a place to go to, a place to gather, or a place to be seen in) comparable to physical spaces. There is also no empirical study on how IT professionals use or "experience" social media. The data for this study are being collected through a study of IT professionals who currently use Twitter. A digital ethnography approach is being taken, wherein the researcher "follows" the participants online and observes their behaviours and interactions on social media. Next, a sub-set of participants will be interviewed on their experiences with and within social media and on how social media compares with traditional information grounds, information communication, and collaborative environments.
An Evolved Grounded Theory (Glaser, 1992) approach will be used to analyse tweet data and interviews and to map the findings against Information Grounds Theory. Findings from this study will provide a foundational understanding of IT professionals' experiences within social media, and can help both professionals and researchers understand this fast-evolving method of communication.
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for a certain combination of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved with the `best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as multiple-fingerprint and multiple-handwriting-sample based identity verification systems.
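The stage-wise selection idea can be sketched as follows. The SER measure itself is not defined in this abstract, so the toy below uses a simple per-stage error score as a stand-in, ranks the pool by it, and runs OR-fusion over the first two stages; all classifier names and error rates are invented.

```python
import random

random.seed(1)

# Hypothetical pool: each classifier is modeled only by its (miss, false-alarm) rates
POOL = {"c1": (0.10, 0.08), "c2": (0.06, 0.15), "c3": (0.12, 0.05)}

def stage_error(miss, fa):
    """Crude per-stage error score; the paper's SER is a more specialized measure."""
    return miss + fa

def select_order(pool):
    """Greedy selection: try the classifier most likely to be correct first."""
    return sorted(pool, key=lambda k: stage_error(*pool[k]))

def sequential_verify(order, pool, genuine, max_stages=2):
    """Accept as soon as any of the first max_stages classifiers accepts (OR fusion)."""
    for name in order[:max_stages]:
        miss, fa = pool[name]
        accepted = (random.random() > miss) if genuine else (random.random() < fa)
        if accepted:
            return True
    return False

order = select_order(POOL)
trials = 10000
far = sum(sequential_verify(order, POOL, genuine=False) for _ in range(trials)) / trials
frr = 1 - sum(sequential_verify(order, POOL, genuine=True) for _ in range(trials)) / trials
print(order, round(far, 3), round(frr, 3))
```

Note how OR fusion trades error types: the simulated false-reject rate is roughly the product of the stage miss rates, while the false-accept rate grows with each added stage, which is the trade-off the sequential architecture controls.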
Abstract:
The Australian region spans some 60° of latitude and 50° of longitude and displays considerable regional climate variability both today and during the Late Quaternary. A synthesis of marine and terrestrial climate records, combining findings from the Southern Ocean, temperate, tropical and arid zones, identifies a complex response of climate proxies to a background of changing boundary conditions over the last 35,000 years. Climate drivers include the seasonal timing of insolation, greenhouse gas content of the atmosphere, sea level rise and ocean and atmospheric circulation changes. Our compilation finds few climatic events that could be used to construct a climate event stratigraphy for the entire region, limiting the usefulness of this approach. Instead, we have taken a spatial approach, looking to discern the patterns of change across the continent. The data identify the clearest and most synchronous climatic response at the time of the Last Glacial Maximum (LGM) (21 ± 3 ka), with unambiguous cooling recorded in the ocean, and evidence of glaciation in the highlands of tropical New Guinea, southeast Australia and Tasmania. Many terrestrial records suggest drier conditions, but with the timing of inferred snowmelt and changes to the rainfall/runoff relationships driving higher river discharge at the LGM. In contrast, the deglaciation is a time of considerable south-east to north-west variation across the region. Warming was underway in all regions by 17 ka. Post-glacial sea level rise and its associated regional impacts have played an important role in determining the magnitude and timing of climate response in the north-west of the continent in contrast to the southern latitudes. No evidence for cooling during the Younger Dryas chronozone is evident in the region, but the Antarctic cold reversal clearly occurs south of Australia.
The Holocene period is a time of considerable climate variability associated with an intense monsoon in the tropics early in the Holocene, giving way to a weakened monsoon and an increasingly El Niño-dominated ENSO to the present. The influence of ENSO is evident throughout the southeast of Australia, but not the southwest. This climate history provides a template from which to assess the regionality of climate events across Australia and make comparisons beyond our region.
Abstract:
We employed a Hidden Markov Model (HMM) algorithm in loss of heterozygosity (LOH) analysis of high-density single nucleotide polymorphism (SNP) array data from the Non-Hodgkin's lymphoma (NHL) entities follicular lymphoma (FL) and diffuse large B-cell lymphoma (DLBCL). This revealed a high frequency of LOH over the chromosomal region 11p11.2, containing the gene encoding the protein tyrosine phosphatase receptor type J (PTPRJ). Although PTPRJ regulates components of key survival pathways in B-cells (i.e., BCR, MAPK, and PI3K signaling), its role in B-cell development is poorly understood. LOH of PTPRJ has been described in several types of cancer but not in any hematological malignancy. Interestingly, FL cases with LOH exhibited down-regulation of PTPRJ; in contrast, no significant variation in expression was shown in DLBCLs. In addition, sequence screening in exons 5 and 13 of PTPRJ identified the G973A (rs2270993), T1054C (rs2270992), A1182C (rs1566734), and G2971C (rs4752904) coding SNPs (cSNPs). The A1182 allele was significantly more frequent in FLs and in NHLs with LOH. Significant over-representation of the C1054 (rs2270992) and C2971 (rs4752904) alleles was also observed in LOH cases. A haplotype analysis also revealed a significantly lower frequency of haplotype GTCG in NHL cases, but it was only detected in cases with retention. Conversely, haplotype GCAC was over-represented in cases with LOH. Altogether, these results indicate that the inactivation of PTPRJ may be a common lymphomagenic mechanism in these NHL subtypes and that haplotypes in the PTPRJ gene may play a role in susceptibility to NHL by affecting activation of PTPRJ in these B-cell lymphomas.
Abstract:
Background: Loss of heterozygosity (LOH) is an important marker for one of the 'two hits' required for tumor suppressor gene inactivation. Traditional methods for mapping LOH regions require the comparison of both tumor and patient-matched normal DNA samples. However, for many archival samples patient-matched normal DNA is not available, leading to the under-utilization of this important resource in LOH studies. Here we describe a new method for LOH analysis that relies on the genome-wide comparison of heterozygosity of single nucleotide polymorphisms (SNPs) between cohorts of cases and unmatched healthy control samples. Regions of LOH are defined by consistent decreases in heterozygosity across a genetic region in the case cohort compared to the control cohort. Methods: DNA was collected from 20 follicular lymphoma (FL) tumor samples, 20 diffuse large B-cell lymphoma (DLBCL) tumor samples, neoplastic B-cells of 10 B-cell chronic lymphocytic leukemia (B-CLL) patients, and buccal cell samples matched to 4 of these B-CLL patients. The cohort heterozygosity comparison method was developed and validated using LOH derived in a small cohort of B-CLL by traditional comparisons of tumor and normal DNA samples, and compared to the only alternative method for LOH analysis without patient-matched controls. LOH candidate regions were then generated for enlarged cohorts of B-CLL, FL and DLBCL samples using our cohort heterozygosity comparison method, in order to evaluate potential LOH candidate regions in these non-Hodgkin's lymphoma tumor subtypes. Results: Using a small cohort of B-CLL samples with patient-matched normal DNA, we have validated the utility of this method and shown that it is more accurate and sensitive in detecting LOH candidate regions than the only alternative method, the Hidden Markov Model (HMM) method.
Subsequently, using B-CLL, FL and DLBCL tumor samples, we have utilised cohort heterozygosity comparisons to localise LOH candidate regions in these subtypes of non-Hodgkin's lymphoma. Detected LOH regions included both previously described regions of LOH and novel genomic candidate regions. Conclusions: We have demonstrated the efficacy of cohort heterozygosity comparisons for genome-wide mapping of LOH and shown the approach to be in many ways superior to the HMM method. Additionally, the use of this method to analyse SNP microarray data from 3 common forms of non-Hodgkin's lymphoma yielded interesting tumor suppressor gene candidates, including the ETV3 gene, which was highlighted in both B-CLL and FL.
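The cohort heterozygosity comparison can be illustrated with a small simulation: per-SNP heterozygous-call rates are compared between an unmatched case cohort and a control cohort, and SNPs whose case heterozygosity drops well below the control rate are flagged. The cohort sizes, the simulated LOH region, and the 0.2 flagging threshold below are all assumed for illustration.

```python
import random

random.seed(3)

N_SNPS, N_CASE, N_CTRL = 50, 20, 20
LOH_REGION = range(20, 30)   # simulated region of heterozygosity loss in cases

def genotype(het_prob):
    """One genotype call: heterozygous 'AB' with the given probability."""
    return "AB" if random.random() < het_prob else random.choice(["AA", "BB"])

# Controls are heterozygous at a baseline rate; cases lose heterozygosity in the region
controls = [[genotype(0.35) for _ in range(N_CTRL)] for _ in range(N_SNPS)]
cases = [[genotype(0.05 if s in LOH_REGION else 0.35) for _ in range(N_CASE)]
         for s in range(N_SNPS)]

def het_rate(calls):
    return sum(c == "AB" for c in calls) / len(calls)

# Flag SNPs where case-cohort heterozygosity falls well below the control cohort
flagged = [s for s in range(N_SNPS)
           if het_rate(cases[s]) < het_rate(controls[s]) - 0.2]
print(flagged)
```

With cohorts this small the flag list is noisy (some false positives and misses), which is why the paper defines candidate regions by *consistent* decreases across a genetic region rather than by single-SNP comparisons.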
Abstract:
Collisions between pedestrians and vehicles continue to be a major problem throughout the world. Pedestrians trying to cross roads and railway tracks without caution are highly susceptible to collisions with vehicles and trains. Continuous financial, human and other losses have prompted transport-related organizations to come up with various solutions addressing this issue. However, the quest for new and significant improvements in this area is still ongoing. This work addresses the issue by building a general framework using computer vision techniques to automatically monitor pedestrian movements in such high-risk areas, to enable better analysis of activity and the creation of future alerting strategies. As a result of rapid development in the electronics and semiconductor industry, there is extensive deployment of CCTV cameras in public places to capture video footage. This footage can then be used to analyse crowd activities in those places. This work seeks to identify the abnormal behaviour of individuals in video footage. We propose using a Semi-2D Hidden Markov Model (HMM), a Full-2D HMM and a Spatial HMM to model the normal activities of people. The outliers of the model (i.e. those observations with insufficient likelihood) are identified as abnormal activities. Location features, flow features and optical flow textures are used as the features for the model. The proposed approaches are evaluated using the publicly available UCSD datasets, and we demonstrate improved performance using the Semi-2D Hidden Markov Model compared to other state-of-the-art methods. Further, we illustrate how our proposed methods can be applied to detect anomalous events at rail level crossings.
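The novelty-detection idea above (flagging observations with insufficient likelihood under a model of normal activity) can be sketched with an ordinary discrete HMM and the forward algorithm; the Semi-2D, Full-2D and Spatial variants are beyond a short sketch. The two-state model, its parameters, the discretized motion symbols, and the threshold are all assumed for illustration.

```python
import math

# Toy 2-state HMM standing in for a model of "normal" motion (states: slow, fast)
TRANS = [[0.9, 0.1], [0.2, 0.8]]
EMIT = [{"slow": 0.8, "fast": 0.2},    # emission probs of discretized speed symbols
        {"slow": 0.1, "fast": 0.9}]
INIT = [0.7, 0.3]

def log_likelihood(seq):
    """Scaled forward algorithm; returns log P(seq | normal-activity model)."""
    alpha = [INIT[s] * EMIT[s].get(seq[0], 1e-6) for s in range(2)]
    ll = 0.0
    for obs in seq[1:]:
        scale = sum(alpha)
        ll += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[p] * TRANS[p][s] for p in range(2)) * EMIT[s].get(obs, 1e-6)
                 for s in range(2)]
    return ll + math.log(sum(alpha))

normal = ["slow"] * 6 + ["fast"] * 4
odd = ["run"] * 10     # symbol never seen in training -> tiny emission probability
THRESH = -20.0          # assumed threshold, tuned on held-out normal clips
print(log_likelihood(normal) > THRESH, log_likelihood(odd) > THRESH)
```

A sequence of familiar symbols scores well above the threshold, while an unmodeled activity collapses the likelihood, which is exactly how "outliers of the model" are identified.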
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
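The HMM-filtering ingredient can be sketched as a grid filter over a discretized bearing: predict with an assumed random-walk transition on the bearing circle, then update with a noisy bearing observation. The image-morphology and relative-entropy-rate parts of the paper are not shown, and all parameters below are invented.

```python
import math
import random

random.seed(5)

N = 36        # bearing discretized into 10-degree bins
OBS_SD = 2    # observation noise in bins (assumed)

def trans_prob(i, j):
    """Random-walk transition on the bearing circle (assumed target dynamics)."""
    d = min(abs(i - j), N - abs(i - j))
    return {0: 0.6, 1: 0.2}.get(d, 0.0)

def emit_prob(state, obs):
    """Unnormalized Gaussian likelihood of a noisy bearing observation."""
    d = min(abs(state - obs), N - abs(state - obs))
    return math.exp(-0.5 * (d / OBS_SD) ** 2)

def hmm_filter(belief, obs):
    """One predict + update step of the discrete HMM (grid) filter."""
    pred = [sum(belief[i] * trans_prob(i, j) for i in range(N)) for j in range(N)]
    post = [p * emit_prob(j, obs) for j, p in enumerate(pred)]
    z = sum(post)
    return [p / z for p in post]

belief = [1.0 / N] * N   # uniform prior over bearing
true_bearing = 9
for _ in range(15):
    true_bearing = (true_bearing + random.choice([-1, 0, 0, 1])) % N
    obs = (true_bearing + round(random.gauss(0, OBS_SD))) % N
    belief = hmm_filter(belief, obs)

estimate = max(range(N), key=belief.__getitem__)
print(estimate, true_bearing)
```

Differencing the filtered bearing estimate over time would give a crude pseudobearing-rate signal, which is the quantity the estimators in the paper target before or while the track itself is formed.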
Abstract:
The auxiliary-load DC-DC converters of the Sunshark solar car have never been examined. An analysis of the current design reveals it is complicated and inefficient. Some simple measures to greatly improve the efficiency are presented, which will achieve a worthwhile overall power saving. Two switch-mode power supply DC-DC converter designs are presented. One is a constant-current supply for the LED brake and turn indicators, which allows them to be powered directly from the main DC bus and switched only as necessary. The second is a low-power flyback converter, which employs synchronous rectification among other techniques to achieve good efficiency and regulation over a large range of output powers. Practical results from both converters, and an indication of the overall improvement in system efficiency, will be offered.
Abstract:
The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators, making automated processing of video footage through computer vision technologies necessary. During the past several years there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features in detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. The extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the Semi-2D Hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
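The GMM variant of the likelihood-threshold approach can be sketched in one dimension: fit a small mixture to "normal" per-frame motion features with EM, then flag frames whose log-density falls below a threshold. The feature values, the two-component choice, and the threshold below are all assumed, not taken from the paper.

```python
import math
import random
import statistics

random.seed(11)

# Simulated training features: per-frame motion magnitudes from "normal" footage
# (two typical modes, e.g. pedestrians and cyclists)
normal_train = ([random.gauss(1.0, 0.2) for _ in range(300)] +
                [random.gauss(3.0, 0.3) for _ in range(300)])

def fit_gmm_1d(data, iters=50):
    """Tiny 2-component EM for a 1-D Gaussian mixture."""
    mu = [min(data), max(data)]
    sd = [statistics.stdev(data)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:   # E-step: responsibilities
            p = [w[k] / sd[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        for k in range(2):   # M-step: weighted moments
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                      for r, x in zip(resp, data)) / nk), 1e-3)
            w[k] = nk / len(data)
    return w, mu, sd

def log_density(x, model):
    w, mu, sd = model
    return math.log(sum(w[k] / (sd[k] * math.sqrt(2 * math.pi)) *
                        math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in range(2)))

model = fit_gmm_1d(normal_train)
THRESH = -4.0   # assumed, tuned on held-out normal frames
print(log_density(1.1, model) > THRESH, log_density(8.0, model) > THRESH)
```

A typical feature value scores above the threshold while an out-of-range one does not; the paper applies the same principle to richer feature combinations after perspective normalization.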
Abstract:
In a large interconnected power system, disturbances initiated by a fault or other events cause acceleration of the generator rotors with respect to their synchronous reference frame. This acceleration of rotors can be described by two different dynamic phenomena, as shown in the existing literature. One of the phenomena is simultaneous acceleration; the other is electromechanical wave propagation, which is characterized by travelling waves in terms of a wave equation. This paper demonstrates that, depending on the structure of the system, the exhibited dynamic response will be dominated by one phenomenon, the other, or a mixture of both. Two system structures of choice are examined, with each structure exemplifying one of the phenomena to a different degree in its dynamic response. Dominance of either dynamic phenomenon in a particular system can be predicted by taking into account the relative sizes of the values of its reduced admittance matrix.
Abstract:
Diagnostics of rolling element bearings involves a combination of different techniques of signal enhancement and analysis. The most common procedure comprises a first step of order tracking and synchronous averaging, able to remove the undesired components synchronous with the shaft harmonics from the signal, and a final step of envelope analysis to obtain the squared envelope spectrum. This indicator has been studied thoroughly, and statistically based criteria have been obtained in order to identify damaged bearings. The statistical thresholds are valid only if all the deterministic components in the signal have been removed. Unfortunately, in various industrial applications characterized by heterogeneous vibration sources, the first step of synchronous averaging is not sufficient to eliminate the deterministic components completely, and an additional pre-whitening step is needed before the envelope analysis. Different techniques have been proposed in the past with this aim: the most widespread are linear prediction filters and spectral kurtosis. Recently, a new technique for pre-whitening has been proposed, based on cepstral analysis: the so-called cepstrum pre-whitening. Owing to its low computational requirements and its simplicity, it seems a good candidate to perform the intermediate pre-whitening step in an automatic damage recognition algorithm. In this paper, the effectiveness of the new technique will be tested on data measured on a full-scale industrial bearing test-rig able to reproduce harsh operating conditions. A benchmark comparison with the traditional pre-whitening techniques will be made, as a final step in verifying the potential of cepstrum pre-whitening.
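The first step described above (synchronous averaging) can be sketched as follows: averaging the (order-tracked) signal over revolutions extracts the shaft-synchronous deterministic part, and the residual retains the non-synchronous content, such as bearing impacts, for subsequent pre-whitening and envelope analysis. The simulated signal and all constants below are invented for illustration.

```python
import math
import random

random.seed(2)

SAMPLES_PER_REV = 64   # samples per shaft revolution (assumed already order-tracked)
N_REVS = 200

# Simulated vibration: a shaft-synchronous tone (3rd harmonic) plus sparse,
# non-synchronous bearing-like impacts and broadband noise
signal = []
for rev in range(N_REVS):
    for n in range(SAMPLES_PER_REV):
        phase = 2 * math.pi * n / SAMPLES_PER_REV
        x = math.sin(3 * phase)                       # deterministic, synchronous part
        x += 0.8 if random.random() < 0.01 else 0.0   # random impacts (not synchronous)
        x += random.gauss(0, 0.1)                     # measurement noise
        signal.append(x)

# Synchronous average: mean waveform over one revolution
sync_avg = [sum(signal[r * SAMPLES_PER_REV + n] for r in range(N_REVS)) / N_REVS
            for n in range(SAMPLES_PER_REV)]

# Residual after removing the synchronous part; the impacts survive here
residual = [signal[i] - sync_avg[i % SAMPLES_PER_REV] for i in range(len(signal))]
print(round(max(sync_avg), 2), round(max(residual), 2))
```

The averaged waveform recovers the unit-amplitude synchronous tone, while the impacts (which are not locked to the shaft) remain in the residual, illustrating why envelope-based bearing indicators are computed after this separation.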
Abstract:
The transmission path from the excitation to the measured vibration on the surface of a mechanical system introduces a distortion both in amplitude and in phase. Moreover, in variable-speed conditions, the amplification/attenuation and the phase shift due to the transfer function of the mechanical system vary in time. This phenomenon reduces the effectiveness of traditional tachometer-based order tracking, compromising the results of a discrete-random separation performed by synchronous averaging. In this paper, for the first time, the extent of the distortion is identified both in the time domain and in the order spectrum of the signal, highlighting the consequences for the diagnostics of rotating machinery. A particular focus is given to gears, providing some indications on how to take advantage of the quantification of the disturbance to better tune the techniques developed for compensation of the distortion. The full theoretical analysis is presented and the results are applied to an experimental case.
Abstract:
Integration of small-scale electricity generators, known as Distributed Generation (DG), into distribution networks has become increasingly popular. This tendency, together with the falling price of synchronous-type generators, gives DG a better chance of participating in the voltage regulation process together with other devices already available in the system. The voltage control issue turns out to be a very challenging problem for distribution engineers, since existing control coordination schemes need to be reconsidered to take DG operation into account. In this paper, we propose a control coordination technique that is able to utilize the DG as a voltage regulator while minimizing interaction with other active devices, such as the On-load Tap Changing transformer (OLTC) and voltage regulators. The technique has been developed based on the concepts of control zones, Line Drop Compensation (LDC), and the choice of controller parameters. Simulations carried out on an Australian system show that the technique is suitable and flexible for any system with multiple regulating devices, including DG.
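The LDC ingredient mentioned above can be sketched as a simple calculation: the regulator estimates the voltage at a remote load centre from its local voltage and current measurements and the compensator impedance, then compares the estimate with a target band to decide on tap movements. All per-unit values below are hypothetical, not taken from the paper.

```python
# Hypothetical feeder values; all quantities in per unit (assumed)
Z_LINE = complex(0.02, 0.06)   # R + jX between regulator and load centre
V_REG = complex(1.03, 0.0)     # measured voltage at the regulator terminals
I_LOAD = complex(0.9, -0.3)    # measured feeder current (lagging power factor)

# LDC estimate of the voltage at the remote load centre
v_remote = V_REG - I_LOAD * Z_LINE

# Tap decision: move only if the estimate leaves the target deadband
TARGET, BANDWIDTH, TAP_STEP = 1.0, 0.01, 0.00625
error = TARGET - abs(v_remote)
taps = 0 if abs(error) <= BANDWIDTH / 2 else round(error / TAP_STEP)
print(round(abs(v_remote), 4), taps)
```

With these numbers the estimated load-centre voltage sits just inside the deadband, so no tap movement is ordered; coordinating the deadbands and delays of the OLTC, line regulators, and DG controllers is what prevents the devices from interacting.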