977 results for Short Loadlength, Fast Algorithms
Abstract:
In this paper, we formulate the electricity retailers' short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs, in addition to their physical assets, to effectively manage the risks of market price and load variations. In this model, DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also aims to minimize peak demand, to avoid high capacity charges in the form of grid tariffs or penalties. The non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm, is used to solve the multi-objective problem. A case study illustrates the performance of the proposed methodology. Simulation results show the effectiveness of the model for designing incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers' multi-objective problem.
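The abstract does not include the authors' implementation; purely as a minimal sketch of the fast non-dominated sorting step at the core of NSGA-II, the following Python fragment ranks a toy population of candidate incentive schedules under two minimization objectives (negative expected profit and peak demand). All objective values are illustrative placeholders, not results from the paper.

```python
# Minimal sketch of NSGA-II's fast non-dominated sorting step.
# Objectives (negative profit, peak demand) are toy placeholders.

def dominates(a, b):
    """True if solution a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """Return solution indices grouped into successive Pareto fronts."""
    n = len(objs)
    S = [[] for _ in range(n)]   # solutions dominated by i
    dom_count = [0] * n          # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]

# Toy population: (negative expected profit, peak demand in MW).
population = [(-120.0, 9.5), (-110.0, 8.0), (-95.0, 7.2), (-120.0, 8.9)]
print(fast_non_dominated_sort(population))  # -> [[1, 2, 3], [0]]
```

In the full algorithm, this ranking is combined with crowding-distance sorting and elitist selection to evolve the Pareto front of incentive schedules.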
Abstract:
The current study describes the in vitro phosphorylation of a human hair keratin using a protein kinase for the first time. Phosphorylation of the keratin was demonstrated by 31P NMR (Nuclear Magnetic Resonance) and Diffuse Reflectance Infrared Fourier Transform (DRIFT) techniques. Phosphorylation induced a 2.5-fold increase in adsorption capacity within the first 10 minutes for a cationic moiety such as Methylene Blue (MB). A thorough description of the MB adsorption process was obtained with several isotherm models. Reconstructed fluorescence microscopy images depict distinct amounts of dye bound to the differently treated hair. The results of this work suggest that the enzymatic phosphorylation of keratins may have significant implications for hair shampooing and conditioning, where short application times of cationic components are of prime importance.
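The isotherm data behind the abstract are not reproduced here; purely as a hedged illustration of fitting one of the adsorption isotherm models mentioned above, the following sketch fits a Langmuir isotherm to hypothetical MB uptake data with SciPy. All concentrations and uptakes are invented for the example.

```python
# Hedged illustration only: Langmuir isotherm fit to hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    """Langmuir isotherm: q = q_max * K * C / (1 + K * C)."""
    return q_max * K * C / (1 + K * C)

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g).
C_eq = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
q_eq = np.array([8.1, 15.3, 22.4, 28.0, 31.5])

(q_max, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(35.0, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg")
```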
Abstract:
Integrated master's dissertation in Mechanical Engineering
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and proper techniques for maximizing FFT/IFFT execution speed, such as pipelining, parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to the various FFT/IFFT algorithms, along with their abilities to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance when a specialized hardware architecture exploits the parallelism inherent in the DFT/IDFT operations. The proposed improvements include simplified multiplication of symbols given in polar coordinates, the use of sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
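As a software analogue of the LUT-based approach discussed above (not the paper's hardware architecture), the following minimal sketch implements an iterative radix-2 Cooley-Tukey FFT in which every twiddle factor is read from a precomputed sine/cosine table rather than recomputed inside the butterfly loops.

```python
# Minimal sketch: iterative radix-2 FFT with a precomputed twiddle LUT.
import cmath

def make_twiddle_lut(n):
    """Precompute e^{-2*pi*i*k/n} for k < n/2, mirroring a hardware sine/cosine LUT."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

def fft_radix2(x, lut=None):
    """Iterative radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if lut is None:
        lut = make_twiddle_lut(n)
    # Bit-reversal permutation puts the input in butterfly order.
    a = list(x)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly stages; twiddles are looked up, never recomputed.
    size = 2
    while size <= n:
        step = n // size          # LUT stride for this stage
        half = size // 2
        for start in range(0, n, size):
            for k in range(half):
                w = lut[k * step]
                u = a[start + k]
                t = w * a[start + k + half]
                a[start + k] = u + t
                a[start + k + half] = u - t
        size *= 2
    return a

# Example: the FFT of an 8-point impulse is all ones.
print(fft_radix2([1, 0, 0, 0, 0, 0, 0, 0]))
```

The independent butterfly groups within each stage are exactly the parallelism the paper's hardware discussion targets: with enough multipliers, every group in a stage can execute concurrently.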
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm that allows the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data and can therefore provide an efficient means to model the local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of 137Cs activity from measurements taken in the Briansk region following the Chernobyl accident.
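The authors' exact formulation is not given in the abstract; as a hedged illustration of the multi-scale idea, the sketch below combines a large-scale and a short-scale RBF kernel as a weighted sum inside a standard scikit-learn SVR. The kernel widths, mixing weight, and synthetic field (a broad trend plus a local anomaly) are assumptions for the example, not the paper's settings.

```python
# Hedged illustration of multi-scale SVR via a mixed RBF kernel.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def multi_scale_kernel(X, Y, gamma_large=0.1, gamma_short=10.0, w=0.7):
    """Weighted sum of a smooth large-scale kernel and a sharp short-scale one."""
    return (w * rbf_kernel(X, Y, gamma=gamma_large)
            + (1 - w) * rbf_kernel(X, Y, gamma=gamma_short))

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))   # measurement coordinates
# Synthetic field: broad trend plus a local anomaly near (2, 2).
y = np.sin(0.5 * X[:, 0]) + 2.0 * np.exp(-((X - 2.0) ** 2).sum(axis=1))

model = SVR(kernel=multi_scale_kernel, C=10.0, epsilon=0.05)
model.fit(X, y)
print(model.predict(X[:5]))
```

The short-scale component captures the local anomaly while the large-scale component models the regional trend; in the paper's setting the optimal mixture is learned from the data rather than fixed as here.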
Abstract:
A technique for fast imaging of regional myocardial function using a spiral acquisition in combination with strain-encoded (SENC) magnetic resonance imaging (MRI) is presented in this paper. This technique, which is termed fast-SENC, enables scan durations as short as a single heartbeat. A reduced field of view (FOV) without foldover artifacts was achieved by localized SENC, which selectively excited the region around the heart. The two images required for SENC imaging (low- and high-tuning) were acquired in an interleaved fashion throughout the cardiac cycle to further shorten the scan time. Regional circumferential contraction and longitudinal shortening of both the left ventricle (LV) and right ventricle (RV) were examined in long- and short-axis views, respectively. The in vivo results obtained from five human subjects and five infarcted dogs are presented. The results of the fast-SENC technique in a single heartbeat acquisition were comparable to those obtained by conventional SENC in a long acquisition time. Therefore, fast-SENC may prove useful for imaging during stress or arrhythmia.
Abstract:
Peripheral nerve injuries with loss of nervous tissue are a significant clinical problem and are currently treated using autologous nerve transplants. To avoid the need for donor nerve, which results in additional morbidity such as loss of sensation and scarring, alternative bridging methods have been sought. We recently showed that an artificial nerve conduit moulded from fibrin glue is compatible with nerve regeneration. In the present study, we used either the fibrin conduit or a nerve graft to bridge a 10 mm or 20 mm sciatic nerve gap and analyzed muscle recovery in adult rats after 16 weeks. The gastrocnemius muscle weights of the operated side were similar for both gap sizes when treated with a nerve graft. In contrast, muscle weight was 48.32 ± 4.96% of the contralateral side for the 10 mm gap repaired with the fibrin conduit, but only 25.20 ± 2.50% for the 20 mm gap repaired with the fibrin conduit. The morphology of the muscles in the nerve graft groups showed an intact, ordered structure, with the muscle fibers grouped in fascicles, whereas the 20 mm nerve gap fibrin group had a more chaotic appearance. The mean area and diameter of fast-type fibers in the 20 mm gap repaired with fibrin conduits were significantly (P<0.01) worse than those of the corresponding 10 mm gap group. In contrast, both gap sizes treated with a nerve graft showed similar fiber size. Furthermore, the 10 mm gaps repaired with either a nerve graft or a fibrin conduit showed similar muscle fiber size. These results indicate that the fibrin conduit can effectively repair short nerve gaps, but further modification, such as the inclusion of regenerative cells, may be required to attain the outcomes of nerve grafts for long gaps.
Abstract:
RATIONALE AND OBJECTIVES: Recent developments in MR imaging equipment have enabled high-quality steady-state free-precession (Balanced FFE, True-FISP) MR imaging with a substantial 'T2-like' contrast, resulting in high signal intensity of the blood-pool without the application of exogenous contrast agents. It is hypothesized that Balanced FFE may be valuable for contrast enhancement in 3D free-breathing coronary MRA. MATERIALS AND METHODS: Navigator-gated, free-breathing, cardiac-triggered coronary MRA was performed in 10 healthy adult subjects and three patients with radiographically defined coronary artery disease using a segmented k-space 3D Balanced FFE imaging sequence. RESULTS: A high contrast-to-noise ratio between the blood-pool and the myocardium (29 ± 8) and long-segment visualization of both coronary arteries could be obtained in about 5 minutes of free breathing using the present navigator-gated Balanced FFE coronary MRA approach. First patient results demonstrated successful display of coronary artery stenoses. CONCLUSION: Balanced FFE offers a potential alternative for endogenous contrast enhancement in navigator-gated free-breathing 3D coronary MRA. The obtained results, together with the relatively short scanning time, warrant further studies in larger patient collectives.
Compressed Sensing Single-Breath-Hold CMR for Fast Quantification of LV Function, Volumes, and Mass.
Abstract:
OBJECTIVES: The purpose of this study was to compare a novel compressed sensing (CS)-based single-breath-hold multislice magnetic resonance cine technique with the standard multi-breath-hold technique for the assessment of left ventricular (LV) volumes and function. BACKGROUND: Cardiac magnetic resonance is generally accepted as the gold standard for LV volume and function assessment. LV function is one of the most important cardiac parameters for diagnosis and the monitoring of treatment effects. Recently, CS techniques have emerged as a means to accelerate data acquisition. METHODS: The prototype CS cine sequence acquires 3 long-axis and 4 short-axis cine loops in a single breath-hold (temporal/spatial resolution: 30 ms/1.5 × 1.5 mm²; acceleration factor 11.0) to measure the left ventricular ejection fraction (LVEF_CS) as well as LV volumes and LV mass using LV model-based 4D software. For comparison, a conventional stack of multi-breath-hold cine images was acquired (temporal/spatial resolution: 40 ms/1.2 × 1.6 mm²). As a reference for the left ventricular stroke volume (LVSV), aortic flow was measured by phase-contrast acquisition. RESULTS: In 94% of the 33 participants (12 volunteers: mean age 33 ± 7 years; 21 patients: mean age 63 ± 13 years, with different LV pathologies), the image quality of the CS acquisitions was excellent. LVEF_CS and LVEF_standard were similar (48.5 ± 15.9% vs. 49.8 ± 15.8%; p = 0.11; r = 0.96; slope 0.97; p < 0.00001). Agreement of LVSV_CS with aortic flow was superior to that of LVSV_standard (overestimation vs. aortic flow: 5.6 ± 6.5 ml vs. 16.2 ± 11.7 ml, respectively; p = 0.012), with less variability (r = 0.91; p < 0.00001 for the CS technique vs. r = 0.71; p < 0.01 for the standard technique). The intraobserver and interobserver agreement for all CS parameters was good (slopes 0.93 to 1.06; r = 0.90 to 0.99). CONCLUSIONS: The results demonstrated the feasibility of applying the CS strategy to evaluate LV function and volumes with high accuracy in patients. The single-breath-hold CS strategy has the potential to replace the multi-breath-hold standard cardiac magnetic resonance technique.
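The vendor prototype sequence and 4D analysis software are not described beyond the abstract; as a generic, hedged illustration of how compressed sensing recovers an image from undersampled k-space, the following minimal ISTA sketch assumes a toy image that is sparse in the image domain. Real CS cine MRI exploits wavelet or spatio-temporal sparsity and coil arrays, none of which is modeled here.

```python
# Toy compressed-sensing reconstruction from undersampled k-space via ISTA.
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding (proximal operator of the l1 norm)."""
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - t, 0.0)

def ista_cs_recon(kspace, mask, lam=0.01, n_iter=100):
    """Minimize ||M F x - y||^2 + lam*||x||_1 with unit-step ISTA,
    assuming the image itself is sparse (a toy stand-in for the
    transform sparsity used in real CS cine MRI)."""
    x = np.zeros_like(kspace)
    for _ in range(n_iter):
        # Gradient step on the data-consistency term.
        resid = mask * np.fft.fft2(x, norm="ortho") - kspace
        x = x - np.fft.ifft2(mask * resid, norm="ortho")
        x = soft_threshold(x, lam)
    return x

# Toy example: a sparse "image", ~4x undersampled k-space.
rng = np.random.default_rng(1)
img = np.zeros((64, 64), dtype=complex)
img[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0
mask = rng.random((64, 64)) < 0.25
kspace = mask * np.fft.fft2(img, norm="ortho")
recon = ista_cs_recon(kspace, mask)
print(np.abs(recon - img).max())
```

The acceleration factor of 11 quoted in the abstract corresponds to keeping roughly 9% of k-space, which is only feasible with the stronger sparsity models and incoherent sampling patterns used clinically.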
Abstract:
One of the most effective techniques for offering QoS routing is minimum interference routing. However, it is complex in terms of computation time and is not oriented toward improving the network protection level. In order to achieve better levels of protection, new minimum interference routing algorithms are necessary. Minimizing the failure recovery time is also a complex process involving different failure recovery phases, some of which depend completely on correct route selection, such as minimizing the failure notification time. The level of protection also involves other aspects, such as the amount of resources used; in this case, shared backup techniques should be considered. Therefore, minimum interference techniques should also be modified to include resource sharing for protection among their objectives. These aspects are reviewed and analyzed in this article, and a new proposal combining minimum interference with fast protection using shared segment backups is introduced. Results show that our proposed method improves both the request rejection ratio and the percentage of bandwidth allocated to backup paths in networks with low and medium protection requirements.
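The article's combined algorithm is not spelled out in the abstract; the sketch below illustrates only the basic minimum-interference intuition, weighting each link inversely by its residual bandwidth so that a new request steers away from interference-critical links, and then computing a link-disjoint backup path. The topology, capacities, and loads are invented for the example, and real shared-backup schemes additionally account for bandwidth shared among backups.

```python
# Hedged sketch of minimum-interference-style routing with a disjoint backup.
import networkx as nx

G = nx.Graph()
# (u, v, capacity, reserved) -- illustrative topology and loads.
links = [("A", "B", 10, 2), ("B", "C", 10, 9), ("A", "D", 10, 1),
         ("D", "C", 10, 3), ("B", "D", 10, 5)]
for u, v, cap, used in links:
    residual = cap - used
    # Scarce residual bandwidth -> high weight -> link is avoided.
    G.add_edge(u, v, residual=residual, weight=1.0 / max(residual, 1e-9))

primary = nx.shortest_path(G, "A", "C", weight="weight")
# Crude protection step: compute a backup disjoint from the primary path.
H = G.copy()
H.remove_edges_from(zip(primary, primary[1:]))
backup = nx.shortest_path(H, "A", "C", weight="weight")
print("primary:", primary, "backup:", backup)
```

Here the heavily loaded B-C link is bypassed by the primary path A-D-C, and the backup A-B-C survives any single failure on the primary, which is the property segment-protection schemes exploit to bound notification and recovery time.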
Abstract:
The drug discovery process has been deeply transformed in recent years by the use of computational ligand-based and structure-based methods, which help identify and optimize lead compounds and, ultimately, deliver new drug candidates more quickly and at lower cost. Structure-based computational methods for drug discovery mainly involve ligand-protein docking and rapid binding free energy estimation, both of which require force field parameterization for many drug candidates. Here, we present a fast force field generation tool, called SwissParam, able to generate, for an arbitrary small organic molecule, topologies and parameters based on the Merck molecular force field, but in a functional form that is compatible with the CHARMM force field. Output files can be used with CHARMM or GROMACS. The topologies and parameters generated by SwissParam are used by the docking software EADock2 and EADock DSS to describe the small molecules to be docked, whereas the protein is described by the CHARMM force field, allowing them to reach success rates ranging from 56% to 78%. We have also developed a rapid binding free energy estimation approach, using SwissParam for ligands and CHARMM22/27 for proteins, which requires only a short minimization to reproduce the experimental binding free energies of 214 ligand-protein complexes involving 62 different proteins, with a standard error of 2.0 kcal·mol⁻¹ and a correlation coefficient of 0.74. Together, these results demonstrate the relevance of using SwissParam topologies and parameters to describe small organic molecules in computer-aided drug design applications, together with a CHARMM22/27 description of the target protein. SwissParam is available free of charge for academic users at www.swissparam.ch.
Abstract:
The aim of this project is to get used to another kind of programming. Until now, I have used very complex programming languages to develop applications or even to program microcontrollers, but the PicoCricket system is evidence that we do not need such complex development tools to build functional devices. The PicoCricket system is a clear example of simple programming that makes devices work the way we program them. There is an easy but effective way to program small devices by simply stating what we want them to do. We cannot implement complex algorithms and mathematical operations, but we can program the devices in a short time. Nowadays, the easier and faster we produce, the more we earn, so the tendency is to develop quickly, cheaply, and easily, and the PicoCricket system makes that possible.
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images through the choice of a single adjustable parameter. A feasible image is defined as one that is consistent with the initial data (i.e., an image that, if it were truly the source of radiation in a patient, could have generated the initial data through the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as the conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure, in the absence of a priori knowledge about the image configuration, is a uniform field.
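The FMAPE algorithm itself (entropy prior, data increment parameters, successive substitution) is not fully specified in the abstract; as a hedged baseline, the sketch below implements the classic MLE-EM update on which such Bayesian reconstructions build, started from a uniform field as the abstract prescribes. The system matrix and counts are synthetic.

```python
# Hedged sketch: classic MLE-EM update for emission tomography.
import numpy as np

def mlem(A, y, n_iter=50):
    """MLE-EM iteration: x <- x / (A^T 1) * A^T (y / (A x)).
    Started from a uniform image, the recommended choice in the
    absence of prior knowledge. FMAPE adds an entropy prior and
    acceleration on top of an update of this multiplicative form."""
    n_pix = A.shape[1]
    x = np.full(n_pix, y.sum() / n_pix)    # uniform initial image
    sens = A.T @ np.ones(A.shape[0])       # sensitivity term A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x / sens * (A.T @ ratio)       # multiplicative EM update
    return x

# Toy system: 40 random projections of a 16-pixel "image",
# with Poisson-distributed counts as in radioactive disintegration.
rng = np.random.default_rng(2)
A = rng.random((40, 16))
x_true = rng.poisson(5.0, 16).astype(float)
y = rng.poisson(A @ x_true).astype(float)
print(mlem(A, y)[:5])
```

The multiplicative form keeps every iterate nonnegative and preserves consistency with the Poisson data model, which is why the choice of initial image matters: any pixel started at zero stays at zero.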
Abstract:
Here we discuss two consecutive MERLIN observations of the X-ray binary LS I +61°303. The first observation shows a double-sided jet extending up to about 200 AU on both sides of a central source. The jet shows a bent S-shaped structure similar to the one displayed by the well-known precessing jet of SS 433. The precession suggested in the first MERLIN image becomes evident in the second one, which shows a one-sided bent jet significantly rotated with respect to the jet of the day before. We conclude that the derived precession of the relativistic (β=0.6) jet explains puzzling previous VLBI results. Moreover, the fact that the precession is fast could explain the never-understood short-term (days) variability of the associated gamma-ray source 2CG 135+01/3EG J0241+6103.