921 results for conventional electrocardiography


Relevance: 10.00%

Abstract:

Planar magnetic elements are becoming a replacement for their conventional rivals. Among the reasons supporting their application is their smaller size: taking up less bulk in the electronic package is a critical advantage from the manufacturing point of view. A planar structure uses PCB copper tracks to form the desired windings. The windings on each PCB layer can be connected to the windings on other layers in various ways to produce series or parallel connections, and they can be applied coreless or with a core, depending on the application in Switched Mode Power Supplies (SMPS). The planar shape of the tracks increases the effective conduction area of the windings, yielding more inductance than conventional windings for a similar copper loss. The problem arising from the planar structure of magnetic inductors is the leakage current between layers generated by a pulse-width-modulated voltage across the inductor. This current depends on the capacitive coupling between the layers, which in turn depends on the physical parameters of the planar scheme. To reduce the electrical power dissipation due to this leakage current and the associated Electromagnetic Interference (EMI), reconsideration of the planar structure may be effective. The aim of this research is to address the problem of capacitive coupling between planar layers and to find a better planar inductor structure that offers less total capacitive coupling and thus less thermal dissipation from leakage currents. Several simulations of various planar structures were carried out using the Finite Element Method (FEM). Laboratory prototypes of these structures were built to the same specifications as the simulated cases, and their capacitive couplings were measured with a spectrum analyser; the test results verified the simulation results.
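
As a rough, hedged illustration of the physics at stake, the sketch below estimates the capacitance between two overlapping winding layers with the parallel-plate model; all dimensions and the FR-4 permittivity are assumed values for illustration, not parameters from the study, and a real FEM analysis would also capture the fringing fields this model ignores.

```python
# Hypothetical first-order estimate of inter-layer capacitance in a planar
# winding, using the parallel-plate model C = eps0 * eps_r * A / d.
# Track dimensions and FR-4 permittivity are illustrative only.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R_FR4 = 4.4           # typical relative permittivity of FR-4

def interlayer_capacitance(track_width_m, track_length_m, layer_gap_m,
                           eps_r=EPS_R_FR4):
    """Parallel-plate estimate of the capacitance between two overlapping
    spiral-track layers; it ignores fringing fields, so it underestimates
    the coupling a FEM solver would report."""
    area = track_width_m * track_length_m      # overlapping copper area, m^2
    return EPS0 * eps_r * area / layer_gap_m   # farads

# Example: 2 mm wide track, 150 mm unrolled length, 0.2 mm prepreg gap.
c = interlayer_capacitance(2e-3, 150e-3, 0.2e-3)
print(f"Estimated inter-layer capacitance: {c * 1e12:.1f} pF")
```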

Relevance: 10.00%

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, with these signals being referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering that occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
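
As a small illustration of one Part I application (signal detection in noise), the sketch below uses the Discrete Fourier Transform to locate a sinusoid buried in noise; the sampling rate, tone frequency, and noise level are invented for the example and are not drawn from the book.

```python
# Minimal sketch: detecting a sinusoid buried in noise via the DFT.
import numpy as np

fs = 1000                      # sampling rate, Hz (assumed)
t = np.arange(0, 1, 1 / fs)    # 1 second of samples
f0 = 123                       # tone frequency to detect, Hz (assumed)

rng = np.random.default_rng(0)
x = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, t.size)

# DFT magnitude spectrum: the tone appears as a clear peak at f0 even
# though it is hard to see in the time-domain waveform at this SNR.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(np.abs(X))]
print(f"Detected tone at {peak:.0f} Hz")
```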

Relevance: 10.00%

Abstract:

Activated protein C resistance (APCR), the most common risk factor for venous thrombosis, is the result of a G to A base substitution at nucleotide 1691 (R506Q) in the factor V gene. Current techniques to detect the factor V Leiden mutation, such as determination of restriction fragment length polymorphisms, do not have the capacity to screen large numbers of samples in a rapid, cost-effective test. The aim of this study was to apply first nucleotide change (FNC) technology to the detection of the factor V Leiden mutation. After preliminary amplification of genomic DNA by polymerase chain reaction (PCR), an allele-specific primer was hybridised to the PCR product and extended using fluorescent terminating dideoxynucleotides, which were detected by colorimetric assay. Using this ELISA-based assay, the prevalence of the factor V Leiden mutation was determined in an Australian blood donor population (n = 500). A total of 18 heterozygotes were identified (3.6%), and all of these were confirmed with a conventional MnlI restriction digest. No homozygotes for the variant allele were detected. We conclude from this study that the frequency of 3.6% is consistent with those published for other Caucasian populations. In addition, FNC technology shows promise as the basis for a rapid, automated DNA-based test for factor V Leiden.
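
The reported figures can be sanity-checked with a short Hardy-Weinberg calculation; the sketch below simply reproduces the arithmetic implied by the abstract (18 heterozygotes in 500 donors) and is not part of the original study.

```python
# Hardy-Weinberg check of the reported figures: 18 heterozygotes in
# n = 500 donors, no homozygotes observed.
n = 500
het = 18

het_freq = het / n                 # 0.036 -> the reported 3.6%
q = het / (2 * n)                  # Leiden (variant) allele frequency
expected_homozygotes = q ** 2 * n  # Hardy-Weinberg expectation

print(f"Heterozygote prevalence: {het_freq:.1%}")
print(f"Variant allele frequency: {q:.3f}")
print(f"Expected variant homozygotes in {n}: {expected_homozygotes:.2f}")
# ~0.16 expected homozygotes, so observing none is unsurprising.
```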

Relevance: 10.00%

Abstract:

The use of the stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and their relation to various meteoric water lines (MWLs), and on plots of either ratio against parameters such as Cl or EC. An extension of this interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. A further enhancement of presentation and interpretation is the production of "isoscapes", usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation of the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enhances presentation by displaying spatial relationships and allowing interpolation between "data points", i.e., borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient-temperature systems; however, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface evaporation of return irrigation water. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a Great Artesian Basin (GAB) signature). Variations in the source of recharging water at a catchment scale can also be displayed. Interpolation between bores is not always possible, depending on bore numbers and spacing and on the elongate configuration of the alluvium; in these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for δ18O and δ2H, with colour coding for isotope values.
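
GVS itself is a dedicated system, but the interpolation step described above can be illustrated in a few lines; the sketch below grids hypothetical borehole δ18O values in 3D with scipy, with all coordinates and values invented, and shows how linear interpolation leaves gaps outside the data, echoing the point about sparse or elongate bore configurations.

```python
# Sketch: interpolating borehole delta-18O values onto a 3D grid.
# Coordinates and isotope values are made up for illustration.
import numpy as np
from scipy.interpolate import griddata

# (easting, northing, screen elevation) of borehole screened zones, metres
points = np.array([
    [0.0,     0.0, -10.0],
    [500.0, 100.0, -15.0],
    [250.0, 400.0, -12.0],
    [700.0, 300.0, -20.0],
])
d18o = np.array([-4.2, -4.8, -3.9, -5.1])   # per mil, illustrative

# Regular 3D grid spanning the data; linear interpolation leaves NaN
# outside the convex hull of the bores, mirroring the limitation noted
# above for sparse or elongate bore configurations.
xi = np.mgrid[0:700:8j, 0:400:8j, -20:-10:5j]
grid = griddata(points, d18o, tuple(xi), method="linear")
print(f"Interpolated cells: {np.count_nonzero(~np.isnan(grid))}"
      f" of {grid.size}")
```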

Relevance: 10.00%

Abstract:

Decoupling networks can alleviate the effects of mutual coupling in antenna arrays. Conventional decoupling networks can provide decoupled and matched ports at a single frequency only. This paper describes dual-frequency decoupling, which is achieved by using a network of series or parallel resonant circuits instead of single reactive elements.
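
The idea can be illustrated with a short calculation: a series LC branch has reactance X(ω) = ωL − 1/(ωC), so its two free parameters let it realise a prescribed reactance at two frequencies, which a single inductor or capacitor cannot. The sketch below solves for L and C given two assumed target reactances; the frequencies and targets are illustrative, not values from the paper.

```python
# Sketch: sizing a series LC branch to hit two target reactances.
import numpy as np

f1, f2 = 1.8e9, 2.4e9          # design frequencies, Hz (assumed)
X1, X2 = -20.0, 35.0           # required reactances at f1, f2 (assumed)

w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
# Linear system in L and S = 1/C:
#   w1*L - S/w1 = X1
#   w2*L - S/w2 = X2
A = np.array([[w1, -1 / w1],
              [w2, -1 / w2]])
L, S = np.linalg.solve(A, [X1, X2])
C = 1 / S
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")
# Caveat: the solution is only physical if L > 0 and C > 0; otherwise a
# parallel LC (or a different topology) is needed.
```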

Relevance: 10.00%

Abstract:

This paper investigates the factors that drive high levels of corporate sustainability performance (CSP), as proxied by membership of the Dow Jones Sustainability World Index. Using a stakeholder framework, we examine the incentives for US firms to invest in sustainability principles and develop a number of hypotheses that relate CSP to firm-specific characteristics. Our results indicate that leading CSP firms are significantly larger, have higher levels of growth and a higher return on equity than conventional firms. Contrary to our predictions, leading CSP firms do not have greater free cash flows or lower leverage than other firms.

Relevance: 10.00%

Abstract:

Due to increasing recognition by industry that partnerships with universities can lead to more effective knowledge and skills acquisition and deployment, corporate learning programmes are currently experiencing a resurgence of interest. A rethinking of corporations' approaches to what has traditionally been classed as 'training' has resulted in a new focus on learning and the adoption of philosophies that underlie the academic paradigm. This paper reports on two studies of collaboration between major international engineering corporations and an Australian university, the aim of which was to up-skill the workforce in response to changing markets. The paper highlights the differences between the models of learning adopted in such collaborations and those in more conventional, university-based environments. The learning programmes combine the ADDIE (analysis, design, development, implementation and evaluation) development model with workplace learning models. Adaptations that have added value for industry partners, and recommendations as to how these can be evolved to cope with change, are discussed. The learning is contextualised by industry-based subject matter experts working in close collaboration with university experts and learning designers to develop programmes that reflect current and future needs in the organisation. Results derived from user feedback indicate that the learning programmes are effectively aligned with the needs of the industry partners whilst simultaneously upholding academic ideals. In other words, it is possible to combine academic and more traditional approaches to develop corporate learning programmes that satisfy requirements in the workplace. Emerging from the study, a new conceptual framework for the development of corporate learning is presented.

Relevance: 10.00%

Abstract:

The historical context surrounding Bruno Taut's Glashaus has been established through the work of authors such as Reyner Banham, Dennis Sharp and Iain Boyd Whyte. However, these English-language accounts are mostly derived from secondary sources such as Adolf Behne and Paul Scheerbart. Surprisingly, Taut's own writings largely do not feature in this prevailing account of his work. Since 1990, strong doubts have arisen about this conventional picture of Taut's Glashaus. Manfred Speidel, for instance, minimizes Paul Scheerbart's contribution to the design by arguing that Scheerbart met Taut only a few months before the construction of the Glashaus, that is, after Taut had finished his preliminary sketches. Kurt Junghanns goes further and asserts that the Glashaus design was complete before Taut and Scheerbart ever met. In 2005, Kai Gutschow published The Culture of Criticism: Adolf Behne and the Development of Modern Architecture in Germany, 1910-1914. Most startlingly, Gutschow asserts that Behne acted as the propagandist for the Glashaus by fabricating its link with Expressionism. This is particularly troubling because nobody contributed more to establishing the link between the Glashaus, Bruno Taut and Expressionism than Behne. In light of this new evidence, this paper concurs that the established historical understanding of the Glashaus is flawed. By returning to Taut's own writings, a reinterpretation can be offered that strongly links the Glashaus to the Victoria regia lily and Strasbourg Cathedral. The significance of this approach is that it re-establishes Taut's own rationale behind the design of the Glashaus, and thus contributes to the re-evaluation of the generally accepted histories of the Modern movement.

Relevance: 10.00%

Abstract:

Given the recent emergence of the smart grid and related technologies, their security is a prime concern. Intrusion detection provides a second line of defense, but conventional intrusion detection systems (IDSs) are unable to adequately address the unique requirements of the smart grid. This paper presents a gap analysis of contemporary IDSs from a smart grid perspective, highlighting the lack of adequate intrusion detection within the smart grid and discussing the limitations of current IDS approaches. The gap analysis identifies current IDSs as being unsuited to smart grid applications without significant changes to address smart-grid-specific requirements.

Relevance: 10.00%

Abstract:

The purpose of this preliminary study was to determine the relevance of categorizing load regime data to assess the functional output and usage of the prosthesis of lower limb amputees. The objectives were (a) to introduce a categorization of load regimes, (b) to present descriptors of each activity, and (c) to report the results for one case. The load applied on the osseointegrated fixation of one transfemoral amputee was recorded using a portable kinetic system for 5 hours. The periods of directional locomotion, localized locomotion, and stationary loading occurred for 44%, 34%, and 22% of the recording time, and accounted for 51%, 38%, and 12% of the duration of the periods of activity, respectively. The absolute maximum force during directional locomotion, localized locomotion, and stationary loading was 19%, 15%, and 8% of body weight on the anteroposterior axis, 20%, 19%, and 12% on the mediolateral axis, and 121%, 106%, and 99% on the long axis, respectively. A total of 2,783 gait cycles were recorded. Approximately 10% more gait cycles and 50% more total impulse were identified than with conventional analyses. The proposed categorization and apparatus have the potential to complement conventional instruments, particularly for difficult cases.
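
As a hedged illustration of how such descriptors might be computed, the sketch below derives time share, impulse share, and peak force per activity category from a synthetic, pre-labelled force recording; the data, labels, and sampling rate are invented, and the study's actual categorisation rules and kinetic system are not reproduced.

```python
# Sketch: per-category descriptors (time share, impulse share, peak force)
# from a labelled long-axis force recording. All data here are synthetic.
import numpy as np

fs = 200                                    # sampling rate, Hz (assumed)
dt = 1 / fs
rng = np.random.default_rng(1)
n = fs * 60                                 # one minute of force samples, N
t = np.arange(n) * dt
force = 700 * np.abs(np.sin(2 * np.pi * 1.0 * t)) + rng.normal(0, 20, n)
# Per-sample activity labels: synthetic stand-ins for the categorization
labels = rng.choice(3, size=n, p=[0.44, 0.34, 0.22])
names = ["directional locomotion", "localized locomotion",
         "stationary loading"]

total_impulse = np.abs(force).sum() * dt    # N*s over the whole recording
for k, name in enumerate(names):
    mask = labels == k
    share = np.abs(force[mask]).sum() * dt / total_impulse
    print(f"{name:22s} time {100 * mask.mean():5.1f}%  "
          f"impulse {100 * share:5.1f}%  "
          f"peak {np.abs(force[mask]).max():6.1f} N")
```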

Relevance: 10.00%

Abstract:

Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender’s strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker’s incentives and knowledge.
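
As a generic illustration of the kind of learning-driven reactive defense described (not the paper's exact algorithm or its competitive-ratio analysis), the sketch below reallocates a fixed defense budget with a multiplicative-weights update driven by observed losses.

```python
# Sketch: a reactive defender reallocating budget across attack surfaces
# with a multiplicative-weights update, learning from past attacks rather
# than overreacting to the last one. Generic online-learning scheme only.
import numpy as np

n_surfaces, rounds, eta = 4, 50, 0.3
weights = np.ones(n_surfaces)

for _ in range(rounds):
    defense = weights / weights.sum()             # budget allocation
    target = int(np.argmin(defense))              # attacker hits weakest spot
    loss = np.zeros(n_surfaces)
    loss[target] = 1.0 / (1.0 + defense[target])  # damage shrinks w/ defense
    weights *= np.exp(eta * loss)                 # learn from observed attack

print("Final allocation:", np.round(weights / weights.sum(), 3))
```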

Relevance: 10.00%

Abstract:

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
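
One concrete convex surrogate from the reject-option literature is the generalised ("double") hinge of Bartlett and Wegkamp; whether it coincides exactly with the φ studied here is not claimed, but it illustrates the construction. The sketch below evaluates that loss and a thresholded reject rule, with the rejection threshold an assumption of the example.

```python
# Sketch: a convex surrogate for classification with a reject option
# (double hinge with rejection cost d in (0, 1/2)); illustrative only.
import numpy as np

def double_hinge(margin, d=0.2):
    """phi(z) = max(0, 1 - z, 1 - ((1 - d)/d) * z); convex, and steeper
    than the ordinary hinge for z < 0, penalising confident mistakes."""
    a = (1 - d) / d
    return np.maximum.reduce([np.zeros_like(margin),
                              1 - margin,
                              1 - a * margin])

def predict_with_reject(f, tau=0.5):
    """Classify sign(f) when |f| > tau, otherwise abstain (label 0).
    The threshold tau is an assumption of this sketch."""
    out = np.sign(f)
    out[np.abs(f) <= tau] = 0
    return out

z = np.linspace(-2, 2, 5)
print(double_hinge(z))                              # surrogate values
print(predict_with_reject(np.array([-1.2, -0.1, 0.3, 0.9])))
```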

Relevance: 10.00%

Abstract:

In this paper, the effect of electric field enhancement on Pt/nanostructured ZnO Schottky diode based hydrogen sensors under reverse bias conditions is investigated. The current-voltage characteristics of these diodes were studied at temperatures from 25 to 620 °C, and their free carrier concentration was estimated by exposing the sensors to hydrogen gas. The experimental results show a significantly lower breakdown voltage in the reverse-bias current-voltage characteristics than in conventional Schottky diodes, and also a greater lateral voltage shift in reverse-bias operation than in forward bias. This can be ascribed to the increased localized electric fields emanating from the sharp edges and corners of the nanostructured morphologies. At 620 °C, voltage shifts of 114 and 325 mV for 0.06% and 1% hydrogen were recorded from the dynamic response under reverse bias.
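
For context, the sketch below evaluates the standard thermionic-emission diode model and the first-order relation between a hydrogen-induced barrier-height change and the lateral voltage shift at fixed current; all parameter values (area, Richardson constant, ideality factor, barrier change) are assumptions for illustration, not fitted values for the Pt/ZnO devices.

```python
# Sketch: thermionic-emission Schottky model and the shift caused by a
# hydrogen-induced barrier lowering. Parameter values are assumed.
import numpy as np

k_B = 1.381e-23        # Boltzmann constant, J/K
q = 1.602e-19          # elementary charge, C

def diode_current(v, phi_b, T=893.0, area=1e-7, a_star=32e4, n=2.0):
    """I = A A* T^2 exp(-q*phi_b/kT) (exp(qV/nkT) - 1).
    T = 893 K is roughly the 620 degC operating point; other values assumed."""
    i_s = area * a_star * T**2 * np.exp(-q * phi_b / (k_B * T))
    return i_s * np.expm1(q * v / (n * k_B * T))

d_phi = 0.05           # assumed H2-induced barrier lowering, V
i_air = diode_current(0.3, 0.60)
i_h2 = diode_current(0.3, 0.60 - d_phi)
print(f"Current rises x{i_h2 / i_air:.1f} after barrier lowering")
# At fixed current the I-V curve shifts by roughly n * d_phi = 100 mV,
# the same order as the 114 mV shift reported above.
print(f"Approximate lateral voltage shift: {1e3 * 2.0 * d_phi:.0f} mV")
```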

Relevance: 10.00%

Abstract:

Patients with metastatic melanoma or multiple myeloma have a dismal prognosis because these aggressive malignancies resist conventional treatment. A promising new oncologic approach uses molecularly targeted therapeutics that overcome apoptotic resistance while achieving tumor selectivity. The unexpected selectivity of proteasome inhibition for inducing apoptosis in cancer cells, but not in normal cells, prompted us to define the mechanism of action for this class of drugs, including the Food and Drug Administration-approved bortezomib. In this report, five melanoma cell lines and a myeloma cell line were treated with three different proteasome inhibitors (MG-132, lactacystin, and bortezomib), and the mechanism underlying the apoptotic pathway was defined. Following exposure to proteasome inhibitors, effective killing of human melanoma and myeloma cells, but not of normal proliferating melanocytes, was shown to involve p53-independent induction of the BH3-only protein NOXA. Induction of NOXA at the protein level was preceded by enhanced transcription of NOXA mRNA. Engagement of the mitochondrial apoptotic pathway involved release of cytochrome c, second mitochondria-derived activator of caspases, and apoptosis-inducing factor, accompanied by a proteolytic cascade with processing of caspases 9, 3, and 8 and poly(ADP-ribose) polymerase. Blocking NOXA induction using an antisense (but not a control) oligonucleotide reduced the apoptotic response by 30% to 50%, indicating a NOXA-dependent component in the overall killing of melanoma cells. These results provide a novel mechanism for overcoming the apoptotic resistance of tumor cells, and validate agents triggering NOXA induction as potentially selective cancer therapeutics for life-threatening malignancies such as melanoma and multiple myeloma.

Relevance: 10.00%

Abstract:

A diagnostic method based on Bayesian networks (probabilistic graphical models) is presented. Unlike conventional diagnostic approaches, which focus on system residuals at one or a few operating points, this method performs diagnosis by analyzing system behavior patterns over a window of operation. It is shown how this approach can loosen the dependency of diagnostic methods on precise system modeling while maintaining the desired characteristics of fault detection and diagnosis (FDD) tools (fault isolation, robustness, adaptability, and scalability) at a satisfactory level. As an example, the method is applied to fault diagnosis in HVAC systems, an area with considerable modeling and sensor network constraints.
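
As a toy illustration of the inference step (not the paper's HVAC model), the sketch below computes a posterior over a discrete fault variable from a window of observed behaviour patterns by direct enumeration over a hand-specified Bayesian network; the structure and probabilities are invented.

```python
# Sketch: posterior over a fault variable given a window of behaviour
# patterns, via direct enumeration. All probabilities are invented.
import numpy as np

faults = ["none", "stuck_valve", "sensor_drift"]
prior = np.array([0.90, 0.05, 0.05])

# P(observed pattern | fault) for each discrete pattern; rows align with
# the `faults` list above.
likelihood = {
    "normal":      np.array([0.85, 0.20, 0.40]),
    "oscillating": np.array([0.10, 0.70, 0.20]),
    "offset":      np.array([0.05, 0.10, 0.40]),
}

def posterior(window):
    """Multiply per-pattern likelihoods across the window (patterns assumed
    conditionally independent given the fault), then normalise."""
    p = prior.copy()
    for pattern in window:
        p *= likelihood[pattern]
    return p / p.sum()

# Diagnosis from a window of behaviour, not a single operating point:
window = ["oscillating", "oscillating", "normal", "oscillating"]
for f, pr in zip(faults, posterior(window)):
    print(f"P({f} | window) = {pr:.3f}")
```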