389 results for distributed combination of classifiers
Abstract:
Diagnostics is based on the characterization of mechanical system condition and allows early detection of a possible fault. Signal processing is an approach widely used in diagnostics, since it allows directly characterizing the state of the system. Several types of advanced signal processing techniques have been proposed in the last decades and added to more conventional ones. However, these techniques are seldom able to handle non-stationary operating conditions. Diagnostics of roller bearings is no exception to this framework. In this paper, a new vibration signal processing tool, able to perform roller bearing diagnostics under any working condition and noise level, is developed on the basis of two data-adaptive techniques, Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), coupled by means of the mathematics related to the Hilbert transform. The effectiveness of the new signal processing tool is proven by means of experimental data measured on a test rig that employs high-power, industrial-size components.
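As a rough illustration of the kind of processing chain this abstract describes, the following Python sketch decomposes a vibration signal with EMD, keeps the most impulsive intrinsic mode function, and inspects its Hilbert envelope spectrum, where bearing fault frequencies would appear as peaks. It is a minimal sketch assuming the third-party PyEMD package and SciPy; the MED pre-whitening step is only indicated as a placeholder, and this is not the paper's exact tool.

```python
# Minimal sketch of EMD + Hilbert envelope analysis for bearing diagnostics.
# Assumptions: PyEMD (pip package "EMD-signal") and SciPy are available; the
# MED pre-whitening stage of the paper is only marked as a placeholder here.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis
from PyEMD import EMD

def envelope_spectrum(x, fs):
    """Frequency axis and magnitude spectrum of the Hilbert envelope of x."""
    env = np.abs(hilbert(x - np.mean(x)))
    spec = np.abs(np.fft.rfft(env - np.mean(env)))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

def bearing_diagnosis(x, fs):
    # (Placeholder) MED pre-whitening would sharpen fault impulses here.
    imfs = EMD().emd(x)                                      # data-adaptive decomposition
    best = imfs[np.argmax([kurtosis(imf) for imf in imfs])]  # most impulsive IMF
    return envelope_spectrum(best, fs)                       # peaks ~ fault frequencies

# Usage: freqs, spec = bearing_diagnosis(vibration_signal, fs=51200)
```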
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
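The selective data combination idea, evaluating each modality's data before deciding whether to use it, can be pictured with a small sketch. The snippet below is a hypothetical illustration, not the paper's metric: it scores frames by image entropy and discards those below a threshold before fusion.

```python
# Hypothetical sketch of per-modality quality gating before data fusion.
# The entropy score and thresholds are illustrative assumptions, not the
# paper's evaluation criteria.
import numpy as np

def image_entropy(gray):
    """Shannon entropy of an 8-bit grayscale frame, used as a crude quality score."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_frames(visual, infrared, vis_thresh=4.0, ir_thresh=4.0):
    """Keep only the modalities whose current frame looks informative enough."""
    selected = []
    if image_entropy(visual) >= vis_thresh:    # smoke tends to flatten the visual image
        selected.append(("visual", visual))
    if image_entropy(infrared) >= ir_thresh:   # extreme heat can saturate the IR image
        selected.append(("infrared", infrared))
    return selected  # empty list: skip the update and rely on motion prediction
```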
Abstract:
The combination of thermally- and photochemically-induced polymerization using light-sensitive alkoxyamines was investigated. The thermally driven polymerizations were performed via the cleavage of the alkoxyamine functionality, whereas the photochemically-induced polymerizations were carried out either by nitroxide-mediated photopolymerization (NMP2) or by a classical type II mechanism, depending on the structure of the light-sensitive alkoxyamine employed. Once the potential of the various structures as initiators of thermally- and photo-induced polymerizations was established, their use in combination for block copolymer syntheses was investigated. With each alkoxyamine investigated, block copolymers were successfully obtained and the system was applied to the post-modification of polymer coatings for application in patterning and photografting.
Abstract:
The requirement of distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, affecting the overall computing performance significantly. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments with bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines has been achieved through using the approach presented in this paper.
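One simple way to picture the task distribution problem for ATAC workloads is shown below: a greedy scheduler weights each worker by its relative speed and assigns every pairwise comparison task to the currently least-loaded worker. This is a minimal sketch of the general load-balancing idea under assumed inputs, not the distribution strategy developed in the paper.

```python
# Minimal sketch: greedy, speed-weighted assignment of all-to-all comparison
# tasks to heterogeneous workers. The capacity model and policy are illustrative
# assumptions, not the paper's strategy.
from itertools import combinations
import heapq

def distribute_atac(n_items, worker_speeds):
    """Map each pairwise comparison (i, j) to a worker, balancing load by speed."""
    tasks = list(combinations(range(n_items), 2))         # all-to-all comparison pairs
    heap = [(0.0, w) for w in range(len(worker_speeds))]  # (normalised load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(len(worker_speeds))}
    for task in tasks:
        load, w = heapq.heappop(heap)                     # least-loaded worker so far
        assignment[w].append(task)
        heapq.heappush(heap, (load + 1.0 / worker_speeds[w], w))
    return assignment

# Usage: distribute_atac(100, worker_speeds=[1.0, 2.0, 0.5])
```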
Abstract:
An intrinsic challenge associated with evaluating proposed techniques for detecting Distributed Denial-of-Service (DDoS) attacks and distinguishing them from Flash Events (FEs) is the extreme scarcity of publicly available real-world traffic traces. Those available are either heavily anonymised or too old to accurately reflect the current trends in DDoS attacks and FEs. This paper proposes a traffic generation and testbed framework for synthetically generating different types of realistic DDoS attacks, FEs and other benign traffic traces, and for monitoring their effects on the target. Using only modest hardware resources, the proposed framework, consisting of a customised software traffic generator, ‘Botloader’, is capable of generating a configurable mix of two-way traffic, for emulating either large-scale DDoS attacks, FEs or benign traffic traces that are experimentally reproducible. Botloader uses IP-aliasing, a well-known technique available on most computing platforms, to create thousands of interactive UDP/TCP endpoints on a single computer, each bound to a unique IP address, to emulate large numbers of simultaneous attackers or benign clients.
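The IP-aliasing mechanism mentioned here is easy to illustrate. The Python sketch below is a hypothetical example, not Botloader itself: it binds one UDP socket per aliased source address so that a single host appears as many distinct clients. It assumes the aliases have already been added to a network interface (e.g. with `ip addr add` on Linux), and the addresses, port and payload are placeholders.

```python
# Hypothetical sketch of IP-aliasing-based client emulation (not Botloader).
# Assumes the source addresses below are already configured as aliases on a
# local interface; addresses, port and payload are placeholders.
import socket

def make_emulated_clients(source_ips):
    """One UDP socket per aliased source IP, each acting as a separate client."""
    clients = []
    for ip in source_ips:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind((ip, 0))                    # port 0: let the OS pick an ephemeral port
        clients.append(s)
    return clients

def send_burst(clients, target=("192.0.2.1", 9999), payload=b"probe"):
    """Send one datagram from every emulated client to the target."""
    for s in clients:
        s.sendto(payload, target)          # each packet carries a different source IP

# Usage: send_burst(make_emulated_clients([f"10.0.0.{i}" for i in range(10, 20)]))
```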
Abstract:
This research studied the distributed computing of all-to-all comparison problems with big data sets. The thesis formalised the problem and developed a high-performance, scalable computing framework with a programming model, data distribution strategies and task scheduling policies to solve it. The study considered storage usage, data locality and load balancing for performance improvement in solving the problem. The research outcomes can be applied in bioinformatics, biometrics, data mining and other domains in which all-to-all comparisons are a typical computing pattern.
A combination of local inflammation and central memory T cells potentiates immunotherapy in the skin
Abstract:
Adoptive T cell therapy uses the specificity of the adaptive immune system to target cancer and virally infected cells. Yet the mechanism and means by which to enhance T cell function are incompletely described, especially in the skin. In this study, we use a murine model of immunotherapy to optimize cell-mediated immunity in the skin. We show that in vitro-derived central, but not effector, memory-like T cells bring about rapid regression of skin expressing cognate Ag as a transgene in keratinocytes. Local inflammation induced by the TLR7 agonist imiquimod subtly yet reproducibly decreases time to skin graft rejection elicited by central but not effector memory T cells in an immunodeficient mouse model. Local CCL4, a chemokine liberated by TLR7 agonism, similarly enhances central memory T cell function. In this model, IL-2 facilitates the development in vivo of effector function from central memory but not effector memory T cells. In a model of T cell tolerogenesis, we further show that adoptively transferred central but not effector memory T cells can give rise to successful cutaneous immunity, which is dependent on a local inflammatory cue in the target tissue at the time of adoptive T cell transfer. Thus, adoptive T cell therapy efficacy can be enhanced if CD8+ T cells with a central memory T cell phenotype are transferred, and IL-2 is present with contemporaneous local inflammation. Copyright © 2012 by The American Association of Immunologists, Inc.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components in addition to the desired current. Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-established advanced techniques used for the estimation of power system frequency. Chapter 2 focuses on an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3.
The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices. The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to that of a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes. Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme.
Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and the synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
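To make the general idea of the Kalman-filter-based estimators concrete, the following Python sketch tracks the in-phase and quadrature components of a power system signal whose fundamental frequency is assumed to be supplied by a separate frequency-estimation unit; amplitude and phase angle then follow directly. It is a minimal illustration under placeholder noise settings, not any of the thesis' tuned algorithms.

```python
# Minimal sketch of a two-state Kalman filter tracking amplitude and phase of
# y_k = a*cos(theta_k) + b*sin(theta_k) + noise, with the frequency f assumed
# known (e.g. from a separate frequency-estimation unit). Noise covariances and
# the constant-state model are placeholder assumptions.
import numpy as np

def track_phasor(samples, f, fs, q=1e-6, r=1e-2):
    x = np.zeros(2)          # state: [a, b] (in-phase and quadrature components)
    P = np.eye(2)            # state covariance
    Q = q * np.eye(2)        # process noise: allows slow drift of a and b
    amps, phases = [], []
    for k, y in enumerate(samples):
        theta = 2.0 * np.pi * f * k / fs
        H = np.array([[np.cos(theta), np.sin(theta)]])   # 1x2 measurement matrix
        P = P + Q                                        # predict (state held constant)
        S = (H @ P @ H.T).item() + r                     # innovation variance
        K = (P @ H.T) / S                                # 2x1 Kalman gain
        innovation = y - (H @ x).item()
        x = x + K.ravel() * innovation                   # update state
        P = (np.eye(2) - K @ H) @ P                      # update covariance
        amps.append(np.hypot(x[0], x[1]))                # amplitude estimate
        phases.append(np.arctan2(x[1], x[0]))            # phase estimate
    return np.array(amps), np.array(phases)

# Usage: amps, phases = track_phasor(voltage_samples, f=50.0, fs=5000.0)
```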
Abstract:
Principal Topic: A small firm is unlikely to possess internally the full range of knowledge and skills that it requires or could benefit from for the development of its business. The ability to acquire suitable external expertise - defined as knowledge or competence that is rare in the firm and acquired from the outside - when needed thus becomes a competitive factor in itself. Access to external expertise enables the firm to focus on its core competencies and removes the necessity to internalize every skill and competence. However, research on how small firms access external expertise is still scarce. The present study contributes to this under-developed discussion by analysing the role of trust and strong ties in the small firm's selection and evaluation of sources of external expertise (henceforth referred to as the 'business advisor' or 'advisor'). Granovetter (1973, 1361) defines the strength of a network tie as 'a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding) and the reciprocal services which characterize the tie'. Strong ties in the context of the present investigation refer to sources of external expertise who are well known to the owner-manager, and who may be either informal (e.g., family, friends) or professional advisors (e.g., consultants, enterprise support officers, accountants or solicitors). Previous research has suggested that strong and weak ties have different fortes and that the choice of business advisors could thus be critical to business performance. While previous research results suggest that small businesses favour previously well known business advisors, prior studies have also pointed out that an excessive reliance on a network of well known actors might hamper business development, as the range of expertise available through strong ties is limited. But are owner-managers of small businesses aware of this limitation and does it matter to them? Or does working with a well-known advisor compensate for it? Hence, our research model first examines the impact of the strength of tie on the business advisor's perceived performance. Next, we ask what encourages a small business owner-manager to seek advice from a strong tie. A recent exploratory study by Welter and Kautonen (2005) drew attention to the central role of trust in this context. However, while their study found support for the general proposition that trust plays an important role in the choice of advisors, how trust and its different dimensions actually affect this choice remained ambiguous. The present paper develops this discussion by considering the impact of the different dimensions of perceived trustworthiness, defined as benevolence, integrity and ability, on the strength of tie. Further, we suggest that the dimensions of perceived trustworthiness relevant in the choice of a strong tie vary between professional and informal advisors. Methodology/Key Propositions: Our propositions are examined empirically based on survey data comprising 153 Finnish small businesses. The data are analysed utilizing the partial least squares (PLS) approach to structural equation modelling with SmartPLS 2.0. Being non-parametric, the PLS algorithm is particularly well-suited to analysing small datasets with non-normally distributed variables. Results and Implications: The path model shows that the stronger the tie, the more positively the advisor's performance is perceived.
Hypothesis 1, that strong ties will be associated with higher perceptions of performance, is clearly supported. Benevolence is clearly the most significant predictor of the choice of a strong tie for external expertise. While ability also reaches a moderate level of statistical significance, integrity does not have a statistically significant impact on the choice of a strong tie. Hence, we found support for two out of three independent variables included in Hypothesis 2. Path coefficients differed between the professional and informal advisor subsamples. The results of the exploratory group comparison show that Hypothesis 3a, which posited that ability would be more pronouncedly associated with strong ties when choosing a professional advisor, was not supported. Hypothesis 3b, arguing that benevolence is more strongly associated with strong ties in the context of choosing an informal advisor, received some support because the path coefficient in the informal advisor subsample was much larger than in the professional advisor subsample. Hypothesis 3c, postulating that integrity would be more strongly associated with strong ties in the choice of a professional advisor, was supported. Integrity is the most important dimension of trustworthiness in this context. However, integrity is of no concern, or is even a negative consideration, when using strong ties to choose an informal advisor. The findings of this study have practical relevance to the enterprise support community. First of all, given that the strength of tie has a significant positive impact on the advisor's perceived performance, this implies that small business owners appreciate working with advisors in long-term relationships. Therefore, advisors are well advised to invest in relationship building and maintenance in their work with small firms. Secondly, the results show that, especially in the context of professional advisors, the advisor's perceived integrity and benevolence weigh more than ability. This again emphasizes the need to invest time and effort in building a personal relationship with the owner-manager, rather than merely maintaining a professional image and credentials. Finally, this study demonstrates that the dimensions of perceived trustworthiness are orthogonal, with different effects on the strength of tie and ultimately perceived performance. This means that entrepreneurs and advisors should consider the specific dimensions of ability, benevolence and integrity, rather than rely on general perceptions of trustworthiness in their advice relationships.
Abstract:
Forecasting volatility has received a great deal of research attention, with the relative performances of econometric model-based and option-implied volatility forecasts often being considered. While many studies find that implied volatility is the preferred approach, a number of issues remain unresolved, including the relative merit of combining forecasts and whether the relative performances of various forecasts are statistically different. By utilising recent econometric advances, this paper considers whether combination forecasts of S&P 500 volatility are statistically superior to a wide range of model-based forecasts and implied volatility. It is found that a combination of model-based forecasts is the dominant approach, indicating that implied volatility cannot simply be viewed as a combination of various model-based forecasts. Therefore, while often viewed as a superior volatility forecast, implied volatility is in fact an inferior forecast of S&P 500 volatility relative to model-based forecasts.
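As a toy counterpart to the forecast-combination question studied here, the sketch below averages several model-based volatility forecasts with equal weights and compares the combination, the individual models and implied volatility against realised volatility using mean squared error. The equal weights and the loss function are illustrative assumptions; the paper relies on formal econometric comparison procedures.

```python
# Toy forecast-combination exercise: equal-weighted average of model-based
# volatility forecasts, scored against realised volatility by MSE. Weights and
# loss are illustrative choices, not the paper's econometric tests.
import numpy as np

def combine_forecasts(forecasts):
    """Equal-weighted combination; `forecasts` has shape (n_models, T)."""
    return np.mean(np.asarray(forecasts), axis=0)

def mse(forecast, realised):
    return float(np.mean((np.asarray(forecast) - np.asarray(realised)) ** 2))

# Usage with hypothetical forecast series of equal length T:
#   combo = combine_forecasts([garch_fc, sv_fc, ewma_fc])
#   print(mse(combo, realised_vol), mse(implied_vol, realised_vol))
```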
Abstract:
In this paper, the placement and sizing of Distributed Generators (DG) in distribution networks are determined optimally. The objective is to minimize the loss and to improve the reliability. The constraints are the bus voltage, the feeder current and the reactive power flowing back to the source side. The placement and size of the DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA). This increases the diversity of the optimization variables in DPSO so that it does not get stuck in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), which is located at the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
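The hybrid DPSO/GA idea can be pictured with a small sketch: candidate solutions encode (bus, size) pairs for the DG units, a discrete PSO-style move pulls particles toward the best solution found so far, and GA-style crossover and mutation of the weaker particles maintain diversity so the swarm does not settle in a local minimum. The encoding, the parameters and the `fitness` callback below are placeholders, not the paper's 37-bus RBTS formulation.

```python
# Illustrative hybrid discrete-PSO / GA loop for DG placement and sizing.
# The solution encoding, parameters and fitness callback are placeholders,
# not the paper's formulation (loss + reliability with network constraints).
import random

def hybrid_dpso_ga(fitness, buses, sizes, n_dg=2, swarm=20, iters=100, seed=0):
    rng = random.Random(seed)
    rand_sol = lambda: [(rng.choice(buses), rng.choice(sizes)) for _ in range(n_dg)]

    particles = [rand_sol() for _ in range(swarm)]
    best = min(particles, key=fitness)

    for _ in range(iters):
        # Discrete PSO-style move: each particle copies some genes from the global best.
        particles = [[g if rng.random() < 0.5 else b for g, b in zip(p, best)]
                     for p in particles]
        # GA step on the weaker half: crossover with the best, then random mutation.
        particles.sort(key=fitness)
        for i in range(swarm // 2, swarm):
            cut = rng.randrange(n_dg)
            child = particles[i][:cut] + best[cut:]
            if rng.random() < 0.2:                      # mutation preserves diversity
                child[rng.randrange(n_dg)] = (rng.choice(buses), rng.choice(sizes))
            particles[i] = child
        best = min(particles + [best], key=fitness)
    return best

# Usage (hypothetical fitness function returning total loss):
#   best = hybrid_dpso_ga(total_loss, buses=list(range(1, 38)), sizes=[0.25, 0.5, 1.0])
```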
Abstract:
This paper shows how the power quality can be improved in a microgrid that is supplying a nonlinear and unbalanced load. The microgrid contains a hybrid combination of inertial and converter-interfaced distributed generation units, where a decentralized power sharing algorithm is used to control its power management. One of the distributed generators in the microgrid is used as a power quality compensator for the unbalanced and harmonic load. The current reference generation for power quality improvement takes into account the active and reactive power to be supplied by the micro source which is connected to the compensator. Depending on the power requirement of the nonlinear load, the proposed control scheme can change modes of operation without any external communication interfaces. The compensator can operate in two modes depending on the total power demand of the unbalanced nonlinear load. The proposed control scheme can even compensate for system unbalance caused by single-phase micro sources and load changes. The efficacy of the proposed power quality improvement control method in such a microgrid is validated through extensive simulation studies using PSCAD/EMTDC software with detailed dynamic models of the micro sources and power electronic converters.
Abstract:
Problem: Innate immune activation of human cells, for some intracellular pathogens, is advantageous for vacuole morphology and pathogenic viability. It is unknown whether innate immune activation is advantageous to Chlamydia trachomatis viability.
Method of study: Innate immune activation of HEp-2 cells during Chlamydia infection was conducted using lipopolysaccharide (LPS), polyI:C, and wedelolactone (an innate immune inhibitor) to investigate the impact of these conditions on the viability of Chlamydia.
Results: The addition of LPS and polyI:C to stimulate activation of the two distinct innate immune pathways (nuclear factor kappa B and interferon regulatory factor) had no impact on the viability of Chlamydia. However, when compounds targeting either pathway were added in combination with the specific innate immune inhibitor (wedelolactone), a major impact on Chlamydia viability was observed. This impact was found to be due to the induction of apoptosis of the HEp-2 cells under these conditions.
Conclusion: This is the first time that induction of apoptosis has been reported in C. trachomatis-infected cells when treated with a combination of innate immune activators and wedelolactone.
Abstract:
Bioinformatics is dominated by online databases and sophisticated web-accessible tools. As such, it is ideally placed to benefit from the rapid, purpose-specific combination of services achievable via web mashups. The recent introduction of a number of sophisticated frameworks has greatly simplified the mashup creation process, making them accessible to scientists with limited programming expertise. In this paper we investigate the feasibility of mashups as a new approach to bioinformatic experimentation, focusing on an exploratory niche between interactive web usage and robust workflows, and attempting to identify the range of computations for which mashups may be employed. While we discuss each of the major frameworks, we illustrate the ideas with a series of examples developed under the Popfly framework.
Abstract:
Research has found that today's organisations are increasingly aware of the potential barriers and perceived challenges associated with the successful delivery of change, including cultural and sub-cultural differences; financial constraints; restricted timelines; insufficient senior management support; fragmented key stakeholder commitment; and inadequate training. The delivery and application of Innovative Change (see glossary) within a construction industry organisation tends to require a certain level of 'readiness'. This readiness is the combination of an organisation's ability to part from undertakings that may be old, traditional or inefficient, and then being able to readily adopt a procedure or initiative which is new, improved or more efficient. Despite the construction industry's awareness of the various threats and opportunities associated with the delivery of change, research found that little attention is currently given to developing a 'decision-making framework' comprising measurable elements (dynamics) that may assist in more accurately determining an organisation's level of readiness or ability to deliver innovative change. To resolve this, an initial Background Literature Review in 2004 identified six such dynamics, those of Change, Innovation, Implementation, Culture, Leadership, and Training and Education, which were then hypothesised to be key components of a 'Conceptual Decision-making Framework' (CDF) for delivering innovative change within an organisation. To support this hypothesis, a second (more extensive) Literature Review was undertaken from late 2007 to mid 2009. A Delphi study commenced in June 2008, inviting fifteen building and construction industry members to form a panel. The selection criteria required panel members to hold senior positions (manager and above) within a recognised field or occupation, and to have experience, understanding and/or knowledge of the process of delivering change within organisations. The final panel comprised nine representatives from private and public industry organisations and tertiary/research and development (R&D) universities. The Delphi study developed, distributed and collated two rounds of survey questionnaires over a four-month period, comprising open-ended and closed questions (referred to as factors). The first round of Delphi survey questionnaires was distributed to the panel in August 2008, asking members to rate the relevancy of the six hypothesised dynamics. In early September 2008, round-one responses were returned, analysed and documented. From this, an additional three dynamics were identified and confirmed by the panel as being highly relevant during the decision-making process when delivering innovative change within an organisation. The additional dynamics ('Knowledge-sharing and Management'; 'Business Process Requirements'; and 'Life-cycle Costs') were then added to the first six dynamics and used to populate the second (final) Delphi survey questionnaire. This was distributed to the same nine panel members in October 2008, this time asking them to rate the relevancy of all nine dynamics. In November 2008, round-two responses were returned, analysed, summarised and documented. The final results confirmed stability in responses and met Delphi study guidelines. The final contribution is twofold. Firstly, the findings confirm all nine dynamics as key components of the proposed CDF for delivering innovative change within an organisation.
Secondly, the future development and testing of an ‘Innovative Change Delivery Process’ (ICDP) is proposed, one that is underpinned by an ‘Innovative Change Decision-making Framework’ (ICDF), an ‘Innovative Change Delivery Analysis’ (ICDA) program, and an ‘Innovative Change Delivery Guide’ (ICDG).