926 results for Power flow algorithm


Relevance: 30.00%

Abstract:

Frontal alpha band asymmetry (FAA) is a marker of altered reward processing in major depressive disorder (MDD), associated with reduced approach behavior and withdrawal. However, its association with brain metabolism remains unclear. The aim of this study is to investigate FAA and its correlation with resting-state cerebral blood flow (rCBF). We hypothesized an association of FAA with regional rCBF in brain regions relevant for reward processing and motivated behavior, such as the striatum. We enrolled 20 patients and 19 healthy subjects. FAA scores and rCBF were quantified using EEG and arterial spin labeling. Correlations between the two were evaluated, as well as the association of FAA with psychometric assessments of motivated behavior and anhedonia. Patients showed a left-lateralized pattern of frontal alpha activity and a correlation of FAA lateralization with subscores of the Hamilton Depression Rating Scale linked to motivated behavior. An association of rCBF and FAA scores was found in clusters in the dorsolateral prefrontal cortex bilaterally (patients) and in the left medial frontal gyrus, the right caudate head, and the right inferior parietal lobule (whole group). No correlations were found in healthy controls. Higher inhibitory right-lateralized alpha power was associated with lower rCBF values in prefrontal and striatal regions, predominantly in the right hemisphere, which are involved in the processing of motivated behavior and reward. Inhibitory brain activity in the reward system may contribute to some of the motivational problems observed in MDD.
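The FAA score discussed above is conventionally computed as the difference of log-transformed alpha-band power between homologous right and left frontal electrodes. A minimal sketch of that computation follows; the F3/F4 electrode pairing and the numeric power values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def frontal_alpha_asymmetry(alpha_power_left, alpha_power_right):
    """FAA as the difference of log-transformed alpha power, right minus left.

    Positive values indicate relatively greater right-hemisphere alpha power,
    i.e. relative left-frontal activation, since alpha power is inversely
    related to cortical activity."""
    return np.log(alpha_power_right) - np.log(alpha_power_left)

# Hypothetical alpha-band power values (e.g. at F3/F4), for illustration only.
faa = frontal_alpha_asymmetry(alpha_power_left=4.0, alpha_power_right=5.0)
print(round(faa, 4))  # ln(5/4) ≈ 0.2231
```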

Relevance: 30.00%

Abstract:

Maternal thromboembolism and a spectrum of placenta-mediated complications, including the pre-eclampsia syndromes, fetal growth restriction, fetal loss, and abruption, manifest a shared etiopathogenesis and predisposing risk factors. Furthermore, these maternal and fetal complications are often linked to subsequent maternal health consequences that comprise the metabolic syndrome, namely thromboembolism, chronic hypertension, and type II diabetes. Traditionally, several lines of evidence have linked vasoconstriction, excessive thrombosis and inflammation, and impaired trophoblast invasion at the uteroplacental interface as hallmark features of the placental complications. "Omic" technologies and biomarker development have been largely based upon advances in vascular biology, improved understanding of the molecular basis and biochemical pathways responsible for the clinically relevant diseases, and increasingly robust large cohort and/or registry-based studies. Advances in the understanding of innate and adaptive immunity appear to play an important role in several pregnancy complications. Strategies aimed at improving prediction of these pregnancy complications increasingly incorporate hemodynamic blood flow data obtained with non-invasive imaging of the uteroplacental and maternal circulations early in pregnancy. Some evidence suggests that a multiple-marker approach will yield the best-performing prediction tools, which may in turn offer the possibility of early intervention to prevent or ameliorate these pregnancy complications. Prediction of maternal cardiovascular and non-cardiovascular consequences following pregnancy represents an important area of future research, which may have significant public health consequences not only for cardiovascular disease but also for a variety of other disorders, such as autoimmune and neurodegenerative diseases.

Relevance: 30.00%

Abstract:

SNP genotyping arrays have been developed to characterize single-nucleotide polymorphisms (SNPs) and DNA copy number variations (CNVs). The quality of the inferences about copy number can be affected by many factors including batch effects, DNA sample preparation, signal processing, and analytical approach. Nonparametric and model-based statistical algorithms have been developed to detect CNVs from SNP genotyping data. However, these algorithms lack specificity to detect small CNVs due to the high false positive rate when calling CNVs based on the intensity values. Association tests based on detected CNVs therefore lack power even if the CNVs affecting disease risk are common. In this research, by combining an existing Hidden Markov Model (HMM) and the logistic regression model, a new genome-wide logistic regression algorithm was developed to detect CNV associations with diseases. We showed that the new algorithm is more sensitive and can be more powerful in detecting CNV associations with diseases than an existing popular algorithm, especially when the CNV association signal is weak and a limited number of SNPs are located in the CNV.
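The core idea of testing CNV-disease association with a logistic regression can be sketched as follows. This is not the paper's algorithm: the Newton-Raphson fitting routine, the copy-number dosage coding, and all numeric values are illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Fit a logistic regression by Newton-Raphson and return coefficients.
    X: (n, p) design matrix whose first column is the intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))      # fitted probabilities
        grad = X.T @ (y - p)                      # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None]) # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Hypothetical data: disease status vs. copy-number dosage (e.g. as it
# might be inferred by an HMM from array intensities).
rng = np.random.default_rng(0)
n = 400
copy_number = rng.choice([1, 2, 3], size=n, p=[0.15, 0.7, 0.15])
logit = -1.0 + 0.8 * (copy_number - 2)            # assumed true effect
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
X = np.column_stack([np.ones(n), copy_number - 2])
beta = fit_logistic(X, y)
print(beta)  # estimated intercept and CNV effect
```

The sign and rough size of `beta[1]` recover the simulated dosage effect; in a genome-wide setting this test would be repeated per candidate CNV region.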

Relevance: 30.00%

Abstract:

The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
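The slope comparison described above can be illustrated with a maximum-likelihood (Hill-type) estimator for the power-law exponent of event volumes. The estimator formula is standard; the synthetic volume samples and all parameter values below are assumptions for illustration, not the paper's catalog data.

```python
import numpy as np

def powerlaw_mle_exponent(volumes, vmin):
    """Maximum-likelihood (Hill-type) estimate of the exponent of a
    power-law frequency-magnitude distribution over events with
    volume >= vmin: alpha = 1 + n / sum(ln(v / vmin))."""
    v = np.asarray(volumes, dtype=float)
    v = v[v >= vmin]
    return 1.0 + v.size / np.sum(np.log(v / vmin))

# Hypothetical synthetic volumes (m^3): post-fire events drawn with a
# steeper tail than non-fire events, mirroring the qualitative finding.
rng = np.random.default_rng(1)
vmin = 100.0
post_fire = vmin * (1 - rng.random(500)) ** (-1 / 1.6)  # true alpha = 2.6
non_fire = vmin * (1 - rng.random(500)) ** (-1 / 0.9)   # true alpha = 1.9
print(powerlaw_mle_exponent(post_fire, vmin))  # close to 2.6
print(powerlaw_mle_exponent(non_fire, vmin))   # close to 1.9
```

A steeper (larger) estimated exponent for the post-fire sample corresponds to a higher proportion of small events, the pattern reported in the abstract.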

Relevance: 30.00%

Abstract:

The literature on agency problems between controlling and minority owners claims that the separation of cash flow and control rights allows controllers to expropriate listed firms, and further that separation emerges when dual class shares or pyramiding corporate structures exist. Dual class shares and pyramiding coexisted in listed companies of China until the discriminated share reform was implemented in 2005. This paper presents a model of controllers' expropriation behavior as well as empirical tests of expropriation via particular accounting items and of pyramiding-generated expropriation. Results show that expropriation is apparent for state-controlled listed companies. While the reforms have weakened the power to expropriate, separation remains and still generates expropriation. The size of expropriation is estimated at 7 to 8 percent of total assets at the mean. If the "one share, one vote" principle were realized, asset inflation could be reduced by 13 percent.
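The separation of cash flow and control rights through a pyramid can be made concrete with a small sketch. The product rule for cash-flow rights and the weakest-link rule for control rights are common conventions in the corporate-governance literature, not necessarily the exact measures used in this paper, and the stakes below are hypothetical.

```python
def pyramid_rights(stakes):
    """Cash-flow and control rights of an ultimate owner through a pyramid
    chain of ownership stakes (fractions held at each layer).

    Cash-flow rights: product of the stakes along the chain.
    Control rights: weakest-link rule, i.e. the minimum stake in the chain."""
    cash_flow = 1.0
    for s in stakes:
        cash_flow *= s
    control = min(stakes)
    return cash_flow, control

# Hypothetical two-layer pyramid: the controller holds 50% of an
# intermediate company, which in turn holds 30% of the listed firm.
cf, ctrl = pyramid_rights([0.5, 0.3])
print(cf, ctrl)          # 0.15 cash-flow rights vs. 0.3 control rights
separation = ctrl - cf   # the wedge that enables expropriation
print(separation)
```

Under "one share, one vote" the two numbers would coincide and the wedge would vanish, which is the counterfactual the abstract's final sentence refers to.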

Relevance: 30.00%

Abstract:

Instability analysis of compressible orthogonal swept leading-edge boundary layer flow was performed in the context of BiGlobal linear theory [1, 2]. An algorithm was developed exploiting the sparsity characteristics of the matrix discretizing the PDE-based eigenvalue problem. This allowed use of the MUMPS sparse linear algebra package [3] to obtain a direct solution of the linear systems associated with the Arnoldi iteration. The developed algorithm was then applied to efficiently analyze the effect of compressibility on the stability of the swept leading-edge boundary layer and to obtain neutral curves of this flow as a function of the Mach number in the range 0 ≤ Ma ≤ 1. The present numerical results fully confirm the asymptotic theory results of Theofilis et al. [4]. Up to the maximum Mach number studied, it was found that an increase of this parameter reduces both the critical Reynolds number and the range of unstable spanwise wavenumbers.

Relevance: 30.00%

Abstract:

This paper presents a low-power, high-speed 4-data-path 128-point mixed-radix (radix-2 & radix-2²) FFT processor for MB-OFDM Ultra-WideBand (UWB) systems. The processor employs the single-path delay feedback (SDF) pipelined structure for the proposed algorithm and uses substructure-sharing multiplication units and a shift-add structure instead of traditional complex multipliers. Furthermore, the word lengths are properly chosen, so the hardware cost and power consumption of the proposed FFT processor are effectively reduced. The proposed FFT processor is verified and synthesized using 0.13 µm CMOS technology with a supply voltage of 1.32 V. The implementation results indicate that the proposed 128-point mixed-radix FFT architecture supports a throughput rate of 1 Gsample/s with lower power consumption than existing 128-point FFT architectures.
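The arithmetic that such a pipeline computes stage by stage is the ordinary decimation-in-time FFT. A software sketch of the radix-2 case for a 128-point transform follows, checked against numpy's reference FFT; the radix-2² reformulation used in the paper reorganizes the same butterflies so that most complex multiplies become trivial ±j rotations and shift-add operations.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; the input length must be
    a power of two. Each recursion level corresponds to one butterfly
    stage of a hardware pipeline."""
    n = len(x)
    if n == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # twiddle factors
    return np.concatenate([even + tw * odd, even - tw * odd])

# 128-point transform of a random signal, checked against numpy's FFT.
rng = np.random.default_rng(2)
x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
err = np.max(np.abs(fft_radix2(x) - np.fft.fft(x)))
print(err)  # numerically negligible
```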

Relevance: 30.00%

Abstract:

A low-complexity but highly efficient object-counting algorithm is presented that can be embedded in hardware with low computational power. This is achieved by a novel soft data association strategy that can handle multimodal distributions.

Relevance: 30.00%

Abstract:

Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, of Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins"; it is also multivariant and flow-sensitive. In addition, it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
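The notion of a domain-parametric fixpoint computation can be sketched with a generic worklist algorithm: the lattice join and the transfer function are passed in as "plug-ins". This is only a minimal illustration of the general technique, not the paper's optimized algorithm; the toy sign-like domain and the identity transfer function are assumptions.

```python
def fixpoint(cfg, entry, bottom, init, join, transfer):
    """Generic worklist fixpoint computation, parametric in the abstract
    domain: `join` is the lattice least upper bound, `transfer` the
    abstract semantics of one node. A node is re-queued only when the
    abstract value flowing into a successor actually changes, which is
    the basic iteration-saving idea that fixpoint optimizations refine.

    cfg: dict mapping each node to its list of successor nodes."""
    state = {node: bottom for node in cfg}
    state[entry] = init
    worklist = [entry]
    while worklist:
        node = worklist.pop()
        out = transfer(node, state[node])
        for succ in cfg[node]:
            new = join(state[succ], out)
            if new != state[succ]:
                state[succ] = new
                worklist.append(succ)
    return state

# Toy flat domain: 'bot' < {'+', '-', '0'} < 'top' (hypothetical sign domain).
def join(a, b):
    if a == 'bot': return b
    if b == 'bot': return a
    return a if a == b else 'top'

cfg = {0: [1], 1: [2, 3], 2: [1], 3: []}    # small control-flow graph with a loop
transfer = lambda node, val: val             # identity transfer, demo only
print(fixpoint(cfg, 0, 'bot', '+', join, transfer))
```

Because nodes are revisited only on change and the domain has finite height, the loop (2 back to 1) stabilizes after one pass; termination follows from monotonicity of `join` and `transfer`.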

Relevance: 30.00%

Abstract:

Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates, with dynamic predicates (where the program is modified during execution), and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict, in one way or another, the classes of programs which can be analyzed if the information from analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.

Relevance: 30.00%

Abstract:

The characteristics of the power-line communication (PLC) channel are difficult to model due to the heterogeneity of the networks and the lack of common wiring practices. To obtain the full variability of the PLC channel, random channel generators are of great importance for the design and testing of communication algorithms. In this respect, we propose a random channel generator that is based on the top-down approach. Basically, we describe the multipath propagation and the coupling effects with an analytical model. We introduce the variability into a restricted set of parameters and, finally, we fit the model to a set of measured channels. The proposed model enables a closed-form description of both the mean path-loss profile and the statistical correlation function of the channel frequency response. As an example of application, we apply the procedure to a set of in-home measured channels in the band 2-100 MHz whose statistics are available in the literature. The measured channels are divided into nine classes according to their channel capacity. We provide the parameters for the random generation of channels for all nine classes, and we show that the results are consistent with the experimental ones. Finally, we merge the classes to capture the entire heterogeneity of in-home PLC channels. In detail, we introduce the class occurrence probability, and we present a random channel generator that targets the ensemble of all nine classes. The statistics of the composite set of channels are also studied, and they are compared to the results of experimental measurement campaigns in the literature.
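The analytical multipath description referred to above can be sketched in the spirit of the well-known Zimmermann-Dostert top-down formulation, H(f) = Σᵢ gᵢ · exp(-(a₀ + a₁ fᴷ) dᵢ) · exp(-j 2π f dᵢ / v). This is a hedged illustration only: the path gains, lengths, attenuation parameters, and propagation speed below are hypothetical values, and a random generator as proposed in the paper would draw them from fitted distributions.

```python
import numpy as np

def plc_channel_response(f, gains, lengths, a0=1e-3, a1=1e-10, K=1.0, v=1.5e8):
    """Frequency response of a top-down multipath PLC channel model:
    each path i has a gain g_i, a length d_i (m), a frequency-dependent
    cable attenuation exp(-(a0 + a1 f^K) d_i), and a propagation delay
    phase exp(-j 2 pi f d_i / v). All parameter values are hypothetical."""
    f = np.asarray(f, dtype=float)[:, None]
    g = np.asarray(gains, dtype=float)[None, :]
    d = np.asarray(lengths, dtype=float)[None, :]
    att = np.exp(-(a0 + a1 * f ** K) * d)     # cable attenuation per path
    delay = np.exp(-2j * np.pi * f * d / v)   # propagation delay per path
    return np.sum(g * att * delay, axis=1)

# A hypothetical 4-path channel over the 2-100 MHz band considered above.
f = np.linspace(2e6, 100e6, 512)
H = plc_channel_response(f, gains=[0.6, 0.3, -0.2, 0.1],
                         lengths=[30.0, 45.0, 60.0, 90.0])
path_loss_db = 20 * np.log10(np.abs(H))
print(path_loss_db.min(), path_loss_db.max())
```

Randomizing the number of paths, gains, and lengths per draw, then fitting the parameter distributions to measured channels, is what turns this closed-form response into the class-based random generator the abstract describes.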

Relevance: 30.00%

Abstract:

The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew, and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. The GMRES algorithm, presented in [1], is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar-coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation.
The results obtained with the 3-D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values as the mesh quality is much better. The entire 3-D model comprises more than 35000 dofs, the biggest single region being a soil region with 21168 dofs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case, the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
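The quoted memory figure can be reproduced with a one-line estimate, assuming the dense BEM coefficient matrix is stored in double-precision complex (16 bytes per entry); that storage format is an assumption consistent with, but not stated in, the text.

```python
def dense_bem_memory_gib(n_dofs, bytes_per_entry=16):
    """Memory (in GiB) to store a fully populated n x n BEM coefficient
    matrix, assuming double-precision complex entries (16 bytes each)."""
    return n_dofs ** 2 * bytes_per_entry / 2 ** 30

# The largest single region above has 21168 dofs:
print(round(dense_bem_memory_gib(21168), 2))  # ≈ 6.68 GiB, consistent with
                                              # the "about 6.8 GB" figure
```

The quadratic growth of this number with the dof count is exactly why avoiding the redundant storage of similar coefficients pays off so strongly.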

Relevance: 30.00%

Abstract:

In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN) based on a RAM-based FPGA are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management such as video cameras, computationally demanding tasks such as image encoding or robust encryption, and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, it can be shown that the combination of techniques such as extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options may yield energy results that compete with and improve upon the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.

Relevance: 30.00%

Abstract:

Power losses due to wind turbine wakes are of the order of 10 to 20% of total power output in large wind farms. The focus of this research, carried out within the EC-funded UPWIND project, is wind speed and turbulence modelling for large wind farms/wind turbines in complex terrain and offshore, in order to optimise wind farm layouts to reduce wake losses and loads.