957 results for Speed up
Abstract:
It is well known that early initiation of specific anti-infective therapy is crucial to reducing mortality in severe infection. Culture-based procedures for identifying pathogens are the diagnostic gold standard in such diseases; however, these methods yield results after 24 to 48 hours at the earliest. Severe infections such as sepsis therefore need to be treated with empirical antimicrobial therapy, which is ineffective in an unknown fraction of these patients. Today's microbiological point-of-care tests are pathogen specific and therefore not appropriate for an infection with a variety of possible pathogens. Molecular nucleic acid diagnostics such as the polymerase chain reaction (PCR) allow the identification of pathogens and resistance genes. These methods are used routinely to speed up the analysis of positive blood cultures. The newest PCR-based systems allow the identification of the 25 most frequent sepsis pathogens in parallel, without prior culture, in less than 6 hours. These systems might thereby shorten the period of potentially insufficient anti-infective therapy. However, such extensive tools are not suitable as point-of-care diagnostics. Miniaturization and automation of nucleic acid based methods are still pending, as is an increase in the number of pathogens and resistance genes these methods can detect. It is assumed that molecular PCR techniques will have an increasing impact on microbiological diagnostics in the future.
Abstract:
Osteoarticular allograft is one possible treatment in wide surgical resections with large defects. Selecting the best osteoarticular allograft is of great relevance for optimal exploitation of the bone databank, a good surgical outcome and the patient's recovery. Current approaches are, however, very time consuming, which hinders these goals in practice. We present a validation study of software that performs automatic bone measurements, used to automatically assess distal femur sizes across a databank. 170 distal femur surfaces were reconstructed from CT data and measured manually following a size-measurement protocol based on the transepicondylar distance (A), the anterior-posterior distance of the medial condyle (B) and the anterior-posterior distance of the lateral condyle (C). Intra- and inter-observer studies were conducted and regarded as ground-truth measurements. Manual and automatic measures were compared. For the automatic measurements, the correlation coefficients between observer one and the automatic method were 0.99 for the A measure and 0.96 for the B and C measures. The average time needed to perform the measurements was 16 h for both manual measurement sessions and 3 min for the automatic method. The results demonstrate the high reliability and, most importantly, the high repeatability of the proposed approach, and a considerable speed-up in planning.
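As an illustration of the agreement analysis described above, the following sketch compares hypothetical manual and automatic measurements using Pearson correlation; the array contents, function name and error metric are assumptions for illustration, not data or methods from the study.

```python
import numpy as np

def agreement(manual, automatic):
    """Pearson correlation and mean absolute error between manual and
    automatic measurements (e.g. measure A, B or C)."""
    r = np.corrcoef(manual, automatic)[0, 1]
    mean_abs_err = np.mean(np.abs(manual - automatic))
    return r, mean_abs_err

# Hypothetical transepicondylar distances (measure A) in mm for five femurs.
manual_A = np.array([82.1, 79.4, 85.0, 77.8, 80.3])
auto_A = np.array([82.4, 79.1, 84.7, 78.2, 80.0])

r, mae = agreement(manual_A, auto_A)
print(f"correlation r = {r:.3f}, mean absolute error = {mae:.2f} mm")
```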
Abstract:
The discovery of binary dendritic events such as local NMDA spikes in dendritic subbranches led to the suggestion that dendritic trees could be computationally equivalent to a two-layer network of point neurons, with a single output unit represented by the soma and input units represented by the dendritic branches. Although this interpretation endows a neuron with high computational power, it is functionally unclear why nature would have preferred the dendritic solution with a single but complex neuron over the network solution with many but simple units. We show that the dendritic solution has a distinct advantage over the network solution when considering different learning tasks. Its key property is that the dendritic branches receive immediate feedback from the somatic output spike, while in the corresponding network architecture the feedback would require additional backpropagating connections to the input units. Assuming a reinforcement learning scenario, we formally derive a learning rule for the synaptic contacts on the individual dendritic trees that depends on the presynaptic activity, the local NMDA spikes, the somatic action potential, and a delayed reinforcement signal. We test the model in two scenarios: the learning of binary classifications and of precise spike timings. We show that the immediate feedback provided by the backpropagating action potential supplies the individual dendritic branches with enough information to efficiently adapt their synapses and to speed up the learning process.
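A minimal sketch of a reward-modulated plasticity update of the general form described above: an eligibility trace built from presynaptic activity, the branch-local NMDA spike and the backpropagating somatic spike, later gated by a delayed reinforcement signal. The variable names and the specific multiplicative form are assumptions, not the paper's derived rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_branches, n_syn = 5, 20
w = rng.normal(0.0, 0.1, (n_branches, n_syn))   # synaptic weights per dendritic branch
eligibility = np.zeros_like(w)

def update_eligibility(pre, nmda_spike, somatic_spike, decay=0.9):
    """Accumulate an eligibility trace from presynaptic activity, the
    branch-local NMDA spike and the backpropagating somatic spike."""
    global eligibility
    eligibility = decay * eligibility + pre * nmda_spike[:, None] * somatic_spike

def apply_reward(reward, baseline=0.0, lr=0.01):
    """Convert the eligibility trace into a weight change once the
    delayed reinforcement signal arrives."""
    global w
    w += lr * (reward - baseline) * eligibility

# One hypothetical trial: binary presynaptic activity, per-branch NMDA spikes,
# one somatic spike, followed by a delayed reward of +1.
pre = rng.integers(0, 2, (n_branches, n_syn)).astype(float)
nmda = rng.integers(0, 2, n_branches).astype(float)
update_eligibility(pre, nmda, somatic_spike=1.0)
apply_reward(reward=1.0)
```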
Abstract:
A growing world population, a changing climate and dwindling fossil fuels will place new pressures on human production of food, medicine, fuel and feedstock in the twenty-first century. Enhanced crop production promises to ameliorate these pressures. Crops can be bred for increased yields of calories, starch, nutrients, natural medicinal compounds and other important products. Enhanced resistance to biotic and abiotic stresses can be introduced, toxins removed, and industrial qualities such as fibre strength and biofuel yield per unit mass can be increased. Induced and natural mutations provide a powerful method for the generation of heritable enhanced traits. While mainly exploited in forward, phenotype-driven approaches, the rapid accumulation of plant genomic sequence information and of hypotheses regarding gene function allows the use of mutations in reverse genetic approaches to identify lesions in specific target genes. Such gene-driven approaches promise to speed up the process of creating novel phenotypes and can enable the generation of phenotypes unobtainable by traditional forward methods. TILLING (Targeting Induced Local Lesions IN Genomes) is a high-throughput, low-cost reverse genetic method for the discovery of induced mutations. The method has been modified for the identification of natural nucleotide polymorphisms, a process called Ecotilling. The methods are general and have been applied to many species, including a variety of different crops. In this chapter we describe the current status of the TILLING and Ecotilling methods and provide an overview of progress in applying these methods to different plant species, with a focus on work related to food production for developing nations.
Abstract:
Ulcerated diabetic foot is a complex problem. Ischaemia, neuropathy and infection are the three pathological components that lead to diabetic foot complications, and they frequently occur together as an aetiologic triad. Neuropathy and ischaemia are the initiating factors, most often together as neuroischaemia, whereas infection is mostly a consequence. The role of peripheral arterial disease in the diabetic foot has long been underestimated, as typical ischaemic symptoms are less frequent in diabetics with ischaemia than in non-diabetics. Furthermore, the healing of a neuroischaemic ulcer is hampered by microvascular dysfunction. Therefore, the threshold for revascularising neuroischaemic ulcers should be lower than that for purely ischaemic ulcers. Previous guidelines have largely ignored these specific demands related to ulcerated neuroischaemic diabetic feet. Any diabetic foot ulcer should always be considered to have vascular impairment unless proven otherwise. Early referral, non-invasive vascular testing, imaging and intervention are crucial to improve diabetic foot ulcer healing and to prevent amputation. Timing is essential, as the window of opportunity to heal the ulcer and save the leg is easily missed. This chapter underlines the paucity of data on the best way to diagnose and treat these diabetic patients. Most of the studies dealing with neuroischaemic diabetic feet are not comparable in terms of patient populations, interventions or outcomes. There is therefore an urgent need for a paradigm shift in diabetic foot care, that is, a new approach to and classification of diabetics with vascular impairment with regard to clinical practice and research. A multidisciplinary approach needs to be implemented systematically, with a vascular surgeon as an integrated member. New strategies must be developed and implemented for diabetic foot patients with vascular impairment, to improve healing, to speed up the healing rate and to avoid amputation, irrespective of the intervention technology chosen. Focused studies on the value of predictive tests, new treatment modalities, and selective and targeted strategies are needed. As specific data on ulcerated neuroischaemic diabetic feet are scarce, recommendations are often of low grade.
Abstract:
The performance of parallel vector implementations of one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementations; the structure of the algorithm must also be considered when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
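As a small illustration of computing one of these transforms from an FFT kernel, the sketch below evaluates the discrete Hartley transform via the standard identity H[k] = Re(F[k]) - Im(F[k]); this is a generic NumPy example, not the vectorized Cray implementation discussed in the abstract.

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform computed from an FFT kernel:
    H[k] = sum_n x[n] * cas(2*pi*n*k/N) = Re(F[k]) - Im(F[k])."""
    F = np.fft.fft(x)
    return F.real - F.imag

x = np.random.default_rng(1).standard_normal(8)
H = dht(x)
# The DHT is its own inverse up to a factor of N.
x_rec = dht(H) / len(x)
assert np.allclose(x, x_rec)
```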
Abstract:
Three comprehensive one-dimensional simulators were used on the same PC to simulate the dynamics of different electrophoretic configurations, including two migrating hybrid boundaries, an isotachophoretic boundary and the zone electrophoretic separation of ten monovalent anions. Two simulators, SIMUL5 and GENTRANS, use a uniform grid, while SPRESSO uses a dynamic adaptive grid. The simulators differ in the way components are handled: SIMUL5 and SPRESSO feature one equation for all components, whereas GENTRANS is based on separate modules for the different types of monovalent components, a module for multivalent components and a module for proteins. The code for multivalent components executes more slowly than that for monovalent components. Furthermore, with SIMUL5 the computational time interval becomes smaller when it is operated with a reduced calculation space featuring moving borders, whereas GENTRANS offers the possibility of data smoothing (removal of negative concentrations), which can avoid numerical oscillations and speed up a simulation. SPRESSO, with its adaptive grid, can simulate the same configurations with fewer grid points and is thus faster in some, but not all, cases. The data reveal that simulations featuring a large number of monovalent components, distributed such that a fine mesh is required throughout a large proportion of the column, are executed fastest with GENTRANS.
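A minimal sketch of the kind of data smoothing mentioned for GENTRANS: clipping spurious negative concentrations produced by numerical oscillation. The rescaling step to conserve the total amount in the column is an assumption added for illustration; the exact procedure used by GENTRANS is not specified in the abstract.

```python
import numpy as np

def remove_negative_concentrations(c):
    """Clip spurious negative concentrations from numerical oscillation,
    then rescale so the total amount in the column is conserved
    (a simple, assumed smoothing strategy)."""
    total = c.sum()
    c = np.clip(c, 0.0, None)
    if c.sum() > 0 and total > 0:
        c *= total / c.sum()
    return c

# Hypothetical concentration profile with oscillation near a sharp boundary.
profile = np.array([0.0, 0.02, 0.15, -0.01, 0.60, 0.24, -0.005, 0.0])
print(remove_negative_concentrations(profile))
```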
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at-fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the appropriate internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research utilizes this to create a more efficient emulation method that implements numerous copies of the same circuit on the FPGA. Third, a new method to organize the most efficient faults: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and to use a simpler, efficient method for minimizing inputs.
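The following sketch illustrates the general idea of fault injection by multiplexer insertion on a toy gate-level netlist: each internal node is wrapped in a mux that either passes the real signal or forces a stuck-at value. The netlist representation and function names are hypothetical, and the thesis targets FPGA hardware rather than this software illustration.

```python
# Toy gate-level netlist: node -> (gate, inputs).
netlist = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
    "out": ("NOT", ["n2"]),
}

def insert_fault_muxes(netlist):
    """Wrap every internal node in a 2:1 mux so a stuck-at value can be
    forced at that node during emulation (select=1 injects the fault)."""
    faulty = {}
    for node, (gate, ins) in netlist.items():
        faulty[node + "_good"] = (gate, ins)
        # MUX(select, forced_value, good_value); downstream gates still
        # reference `node`, so they see the possibly faulted signal.
        faulty[node] = ("MUX", [f"sel_{node}", f"sa_{node}", node + "_good"])
    return faulty

for name, cell in insert_fault_muxes(netlist).items():
    print(name, cell)
```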
Abstract:
Purpose: Mismatches between pump output and venous return in a continuous-flow ventricular assist device may elicit episodes of ventricular suction. This research describes a series of in vitro experiments to characterize the operating conditions under which the EVAHEART centrifugal blood pump (Sun Medical Technology Research Corp., Nagano, Japan) can be operated with minimal concern regarding left ventricular (LV) suction. Methods: The pump was interposed into a pneumatically driven pulsatile mock circulatory system (MCS) in the ventricular apex to aorta configuration. Under varying conditions of preload, afterload, and systolic pressure, the speed of the pump was increased step-wise until suction was observed. Identification of suction was based on pump inlet pressure. Results: In the case of reduced LV systolic pressure, reduced preload (≤10 mmHg) and reduced afterload (≤60 mmHg), suction was observed for speeds ≥2,200 rpm. However, suction did not occur at any speed (up to a maximum of 2,400 rpm) when preload was kept within 10-14 mmHg and afterload was ≥80 mmHg. Although in vitro experiments cannot replace in vivo models, the results indicate that ventricular suction can be avoided if sufficient preload and afterload are maintained. Conclusion: Conditions of hypovolemia and/or hypotension may increase the risk of suction at the highest speeds, irrespective of the native ventricular systolic pressure. However, in vitro guidelines are not directly transferable to the clinical situation; therefore, patient-specific evaluation is recommended, which can be aided by ultrasonography at various points in the course of support.
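A schematic sketch of the experimental logic described above: ramp pump speed step-wise and flag suction when the pump inlet pressure falls below a threshold. The threshold value, step size and measurement function are assumptions for illustration, not the study's protocol parameters.

```python
def find_suction_speed(measure_inlet_pressure_mmHg,
                       start_rpm=1400, max_rpm=2400, step_rpm=100,
                       suction_threshold_mmHg=-20.0):
    """Increase pump speed step-wise and return the first speed at which
    the inlet pressure indicates ventricular suction (None if none)."""
    for rpm in range(start_rpm, max_rpm + 1, step_rpm):
        p_inlet = measure_inlet_pressure_mmHg(rpm)
        if p_inlet < suction_threshold_mmHg:
            return rpm
    return None

# Hypothetical stand-in for the mock-loop measurement.
def fake_measurement(rpm):
    return 5.0 - 0.05 * (rpm - 1400)   # inlet pressure falls as speed rises

print(find_suction_speed(fake_measurement))
```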
Abstract:
Particulate matter (PM) emission standards set by the US Environmental Protection Agency (EPA) have become increasingly stringent over the years. The EPA regulation for PM in heavy-duty diesel engines was reduced to 0.01 g/bhp-hr for the year 2010. Heavy-duty diesel engines make use of an aftertreatment filtration device, the Diesel Particulate Filter (DPF). DPFs are highly efficient at filtering PM (known as soot) and are an integral part of the 2010 heavy-duty diesel aftertreatment system. PM accumulates in the DPF as the exhaust gas flows through it, and it needs to be removed periodically by oxidation for the filter to keep functioning efficiently. This oxidation process is also known as regeneration. There are two types of regeneration processes: active regeneration (oxidation of PM by external means) and passive oxidation (oxidation of PM by internal means). Active regeneration typically occurs in high-temperature regions, about 500-600 °C, much higher than normal diesel exhaust temperatures; the exhaust temperature therefore has to be raised with the help of external devices such as a Diesel Oxidation Catalyst (DOC) or a fuel burner, and the O2 oxidizes the PM, producing CO2 as the oxidation product. In passive oxidation, one route of regeneration is the use of NO2, which oxidizes the PM, producing NO and CO2 as oxidation products; this process occurs at lower temperatures (200-400 °C) than active regeneration. Generally, DPF substrate walls are washcoated with catalyst material to speed up the rate of PM oxidation, and the washcoat is indeed observed to increase the oxidation rate. The goal of this research is to develop a simple mathematical model to simulate PM depletion during active regeneration in a DPF (catalyzed and non-catalyzed). A simple, zero-dimensional kinetic model was developed in MATLAB. Experimental data required for calibration were obtained from active regeneration experiments performed on PM-loaded mini DPFs in an automated flow reactor; the DPFs were loaded with PM from the exhaust of a commercial heavy-duty diesel engine. The model was calibrated to the data obtained from the active regeneration experiments, and numerical gradient-based optimization techniques were used to estimate the kinetic parameters of the model.
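A minimal sketch of a zero-dimensional PM-depletion model of the kind described above. The thesis model was implemented in MATLAB; this Python version uses an assumed first-order Arrhenius rate expression with illustrative, uncalibrated parameter values.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def simulate_regeneration(m0_g, T_K, y_O2, A=1.0e7, Ea=150e3, dt=0.1, t_end=1200.0):
    """Zero-dimensional PM oxidation: dm/dt = -A*exp(-Ea/(R*T)) * y_O2 * m.
    A and Ea are illustrative values, not the thesis's calibrated kinetics."""
    k = A * np.exp(-Ea / (R * T_K)) * y_O2
    t = np.arange(0.0, t_end + dt, dt)
    m = m0_g * np.exp(-k * t)   # analytic solution of the first-order ODE
    return t, m

t, m = simulate_regeneration(m0_g=5.0, T_K=823.0, y_O2=0.10)
print(f"PM remaining after {t[-1]:.0f} s at 550 C: {m[-1]:.3f} g")
```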
Abstract:
Mower is a micro-architecture technique that targets branch misprediction penalties in superscalar processors. It speeds up the misprediction recovery process by dynamically evicting stale instructions and fixing the RAT (Register Alias Table) using explicit branch dependency tracking. Branch dependencies are tracked using simple bit matrices. This low-overhead technique allows the recovery process to overlap with instruction fetching, renaming and scheduling from the correct path. Our evaluation of the mechanism indicates that it yields performance very close to ideal recovery and provides up to 5% speed-up and a 2% reduction in power consumption compared to a traditional recovery mechanism using a reorder buffer and a walker. The simplicity of the mechanism should permit easy implementation of Mower in an actual processor.
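A toy sketch of explicit branch dependency tracking with a bit matrix, as the abstract describes at a high level: each unresolved branch has a bit vector marking which younger in-flight instructions were fetched under it, so a misprediction can squash exactly those instructions. The structure sizes and squash policy here are assumptions, not Mower's actual design.

```python
import numpy as np

MAX_BRANCHES, WINDOW = 8, 32

# dep[b, i] == 1 means in-flight instruction slot i was fetched under
# unresolved branch b (i.e. it must be squashed if b mispredicts).
dep = np.zeros((MAX_BRANCHES, WINDOW), dtype=np.uint8)

def rename_instruction(slot, active_branches):
    """Mark a newly renamed instruction as dependent on every branch
    that is still unresolved at rename time."""
    for b in active_branches:
        dep[b, slot] = 1

def mispredict(branch):
    """Return the instruction slots to evict and clear the branch's row."""
    victims = np.nonzero(dep[branch])[0]
    dep[branch, :] = 0
    return victims

# Hypothetical stream: branch 0 unresolved while slots 3..6 are renamed.
for slot in range(3, 7):
    rename_instruction(slot, active_branches=[0])
print("squash slots:", mispredict(0))
```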
Abstract:
In this thesis, I study skin lesion detection and its applications to skin cancer diagnosis. A skin lesion detection algorithm is proposed, based on color information and thresholding. For the proposed algorithm, several color spaces are studied and the detection results are compared; experimental results show that the YUV color space achieves the best performance. In addition, I develop a distance-histogram-based threshold selection method, which is shown to outperform other adaptive threshold selection methods for color detection. Beyond the detection algorithms, I also investigate GPU speed-up techniques for skin lesion extraction, and the results show that GPUs have potential for speeding up skin lesion extraction. Based on the proposed skin lesion detection algorithms, I developed a mobile skin cancer diagnosis application: a user with the application installed on an iPhone can use the phone as a diagnosis tool to find potential skin lesions on a person's skin and compare the lesions detected by the iPhone with those stored in a database on a remote server.
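A minimal sketch of color-threshold lesion detection in YUV space, the color space the thesis found to perform best. The threshold bounds and file names are placeholders, and the thesis's distance-histogram threshold selection is not reproduced here.

```python
import cv2
import numpy as np

def detect_lesion_mask(bgr_image, lower=(0, 120, 130), upper=(160, 255, 255)):
    """Convert to YUV and threshold to obtain a binary lesion mask;
    the bounds here are placeholders, not learned thresholds."""
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    mask = cv2.inRange(yuv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    # Light morphological clean-up of the mask.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

image = cv2.imread("skin_photo.jpg")          # placeholder path
if image is not None:
    mask = detect_lesion_mask(image)
    cv2.imwrite("lesion_mask.png", mask)
```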
Abstract:
While sound and video may capture viewers' attention, interaction can captivate them. This was not available prior to the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created a demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows the semi-automatic translation of simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UI) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services and are therefore discussed in the first section of this paper. The following sections present an overview of the operation of the content and service creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of candidate metadata languages for describing the iTV service user interface, together with the schema language adopted in this project. A detailed description of the operation of the two tools is then provided to offer insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals. Finally, representative broadcast-oriented and telecommunications-oriented converged service components are introduced, demonstrating how these tools have been used to generate different types of services.
Abstract:
P-GENESIS is an extension to the GENESIS neural simulator that allows users to take advantage of parallel machines to speed up the simulation of their network models or concurrently simulate multiple models. P-GENESIS adds several commands to the GENESIS script language that let a script running on one processor execute remote procedure calls on other processors, and that let a script synchronize its execution with the scripts running on other processors. We present here some brief comments on the mechanisms underlying parallel script execution. We also offer advice on parallelizing parameter searches, partitioning network models, and selecting suitable parallel hardware on which to run P-GENESIS.
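As a generic illustration of the parallel parameter search use case mentioned above (not the P-GENESIS script commands themselves, which are not reproduced here), the sketch below farms out independent parameter sets to worker processes; the model function and its toy objective are assumptions.

```python
from multiprocessing import Pool
from itertools import product

def run_model(params):
    """Placeholder for one simulation run; returns (params, fitness)."""
    g_na, g_k = params
    fitness = -((g_na - 120.0) ** 2 + (g_k - 36.0) ** 2)   # toy objective
    return params, fitness

if __name__ == "__main__":
    grid = list(product([100.0, 110.0, 120.0, 130.0], [30.0, 36.0, 42.0]))
    with Pool(processes=4) as pool:
        results = pool.map(run_model, grid)   # one parameter set per worker task
    best = max(results, key=lambda r: r[1])
    print("best parameters:", best[0])
```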