960 results for Real Electricity Markets Data
Abstract:
The proposed method for analyzing the composition of the cost of electricity is based on the energy conversion processes and the destruction of exergy through the several thermodynamic processes that comprise a combined cycle power plant. The method uses thermoeconomics to evaluate and allocate the cost of exergy throughout the processes, considering costs related to inputs and investment in equipment. Although the concept may be applied to any combined cycle or cogeneration plant, this work develops only the mathematical modeling for three-pressure heat recovery steam generator (HRSG) configurations and total condensation of the produced steam. It is possible to study any n x 1 plant configuration (n sets of gas turbines and HRSGs associated with one steam turbine generator and condenser) with the developed model, assuming that every train operates identically and in steady state. The presented model was conceived from a complex configuration of a real power plant, over which variations may be applied in order to adapt it to a defined configuration under study [Borelli SJS. Method for the analysis of the composition of electricity costs in combined cycle thermoelectric power plants. Master in Energy Dissertation, Interdisciplinary Program of Energy, Institute of Electrotechnics and Energy, University of Sao Paulo, Sao Paulo, Brazil, 2005 (in Portuguese)]. The variations and adaptations include, for instance, use of reheat, supplementary firing and partial-load operation. It is also possible to undertake sensitivity analysis on geometrical equipment parameters. (C) 2007 Elsevier Ltd. All rights reserved.
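For orientation, a minimal sketch of the kind of exergy cost-rate balance that thermoeconomic allocation methods typically write for each plant component; this is a generic textbook formulation, not necessarily the exact equations developed in this paper:

```latex
% Generic thermoeconomic cost-rate balance for a component k:
% cost of exergy leaving = cost of exergy entering + capital/O&M cost rate.
\sum_{e} c_{e,k}\,\dot{E}_{e,k} + c_{w,k}\,\dot{W}_{k}
  \;=\; \sum_{i} c_{i,k}\,\dot{E}_{i,k} + \dot{Z}_{k}
% c       : unit cost of exergy
% \dot{E} : exergy flow rate, \dot{W} : power output
% \dot{Z} : levelized investment and operating cost rate of component k
```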
Abstract:
In this work, an axisymmetric two-dimensional finite element model was developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. The level of film residual stress (sigma(r)), the film elastic modulus (E) and the film work hardening exponent (n) were varied to analyze their effects on indentation data. These numerical results were used to analyze experimental data that were obtained with titanium nitride coated specimens, in which the substrate bias applied during deposition was modified to obtain films with different levels of sigma(r). Good qualitative correlation was obtained when numerical and experimental results were compared, as long as all film properties were considered in the analyses, and not only sigma(r). The numerical analyses were also used to further understand the effect of sigma(r) on the mechanical properties calculated based on instrumented indentation data. In this case, the hardness values obtained based on real or calculated contact areas are similar only when sink-in occurs, i.e. with high n or a high ratio Y/E, where Y is the yield strength of the film. In an additional analysis, four ratios (R/h(max)) between indenter tip radius and maximum penetration depth were simulated to analyze the combined effects of R and sigma(r) on the indentation load-displacement curves. In this case, sigma(r) did not significantly affect the load curve exponent, which was affected only by the indenter tip radius. On the other hand, the proportional curvature coefficient was significantly affected by sigma(r) and n. (C) 2010 Elsevier B.V. All rights reserved.
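For context, the "load curve exponent" and "proportional curvature coefficient" mentioned above usually refer to a power-law fit of the loading branch of the indentation curve; the form below is the common convention, assumed here for illustration rather than taken from the paper:

```latex
% Power-law fit of the indentation loading curve:
% P : applied load, h : penetration depth,
% C : proportional curvature coefficient, x : load curve exponent
P = C\,h^{x}
% For an ideally sharp (self-similar) indenter x \approx 2 (Kick's law);
% tip rounding and film properties shift C and x away from this limit.
```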
Abstract:
For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, which is a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential and Weibull distributions, and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Renyi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood. We obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
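A minimal sketch of the construction, assuming the usual Kumaraswamy-G approach applied to a Weibull baseline; the parameter names below are illustrative and may differ from the paper's notation:

```latex
% Weibull baseline cdf:
% G(x) = 1 - \exp\{-(\lambda x)^{c}\}, \quad x > 0,\ \lambda, c > 0
% Kumaraswamy Weibull cdf with additional shape parameters a, b > 0:
F(x) = 1 - \left[1 - \left(1 - e^{-(\lambda x)^{c}}\right)^{a}\right]^{b}
% a = b = 1 recovers the Weibull; b = 1 gives the exponentiated Weibull;
% c = 1 gives the Kumaraswamy exponential distribution.
```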
Abstract:
This paper proposes a regression model considering the modified Weibull distribution. This distribution can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model checking based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
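For reference, a sketch of the baseline distribution, assuming the modified Weibull of Lai, Xie and Murthy; the paper may use a different parameterization:

```latex
% Modified Weibull distribution (illustrative parameterization), t > 0:
% a > 0, b \ge 0, \lambda \ge 0
S(t) = \exp\!\left(-a\,t^{b} e^{\lambda t}\right), \qquad
h(t) = a\,(b + \lambda t)\,t^{b-1} e^{\lambda t}
% \lambda = 0 recovers the Weibull; suitable (b, \lambda) combinations
% produce a bathtub-shaped hazard.
```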
Abstract:
A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.
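A minimal sketch of how such a four-parameter extension is commonly built, assuming exponentiation of Stacy's generalized gamma cdf; this is an illustrative construction and the paper's exact parameterization may differ:

```latex
% Stacy's generalized gamma cdf (three parameters \alpha, \tau, k > 0):
G(x) = \frac{\gamma\!\left(k, (x/\alpha)^{\tau}\right)}{\Gamma(k)}, \qquad x > 0
% where \gamma(\cdot,\cdot) is the lower incomplete gamma function.
% Exponentiated (four-parameter) extension with extra shape \beta > 0:
F(x) = \left[G(x)\right]^{\beta}
% \beta = 1 recovers the generalized gamma; \tau = 1 gives the
% exponentiated gamma; k = 1 gives the exponentiated Weibull.
```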
Abstract:
This study describes the pedagogical impact of real-world experimental projects undertaken as part of an advanced undergraduate Fluid Mechanics subject at an Australian university. The projects have been organised to complement traditional lectures and introduce students to the challenges of professional design, physical modelling, data collection and analysis. The physical model studies combine experimental, analytical and numerical work in order to develop students’ abilities to tackle real-world problems. A first study illustrates the differences between ideal and real fluid flow force predictions based upon model tests of buildings in a large-size wind tunnel used for research and professional testing. A second study introduces the complexity arising from unsteady non-uniform wave loading on a sheltered pile. The teaching initiative is supported by feedback from undergraduate students. The pedagogy of the course and projects is discussed with reference to experiential, project-based and collaborative learning. The practical work complements traditional lectures and tutorials, and provides learning opportunities that cannot be gained in the classroom, real or virtual. Student feedback demonstrates a strong interest in the project phases of the course. This was associated with greater motivation for the course, leading in turn to lower failure rates. In terms of learning outcomes, the primary aim is to enable students to deliver a professional report as the final product, where physical model data are compared to ideal-fluid flow calculations and real-fluid flow analyses. Thus the students are exposed to a professional design approach involving a high level of expertise in fluid mechanics, with sufficient academic guidance to achieve carefully defined learning goals, while retaining sufficient flexibility for students to construct their own learning goals. The overall pedagogy is a blend of problem-based and project-based learning, which reflects academic research and professional practice. The assessment is a mix of peer-assessed oral presentations and written reports that aims to maximise student reflection and development. Student feedback indicated a strong motivation for courses that include a well-designed project component.
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
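As an illustration of the general approach (not the paper's original network, features or data), a small feedforward classifier can be trained on station measurements and combined with a variable decision threshold, which trades detection rate against false alarms:

```python
# Illustrative sketch of an MLP incident classifier with a tunable threshold.
# Feature names and labels below are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [speed, flow, occupancy] averaged across lanes.
X = rng.normal(size=(500, 3))
y = (X[:, 2] - X[:, 0] + 0.5 * rng.normal(size=500) > 0.8).astype(int)  # toy incident labels

model = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                      max_iter=2000, random_state=0).fit(X, y)

# Lowering the decision threshold raises detection rate but increases false alarms.
threshold = 0.3
alarms = model.predict_proba(X)[:, 1] >= threshold
print("alarm rate:", alarms.mean())
```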
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
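To make the heuristic-versus-optimal comparison concrete, here is a hedged sketch of the kind of greedy selection rule typically used in this literature, with efficiency measured as the number of selected sites; the data and rule are illustrative, not the authors' exact algorithms:

```python
# Illustrative greedy reserve-selection heuristic: repeatedly pick the site
# that covers the most still-unrepresented features, then report cost as the
# number of selected sites.
def greedy_reserve_selection(site_features: dict[str, set[str]]) -> list[str]:
    uncovered = set().union(*site_features.values())
    selected = []
    while uncovered:
        best = max(site_features, key=lambda s: len(site_features[s] & uncovered))
        if not site_features[best] & uncovered:
            break  # remaining features cannot be represented by any site
        selected.append(best)
        uncovered -= site_features[best]
    return selected

sites = {"A": {"wetland", "heath"}, "B": {"heath"}, "C": {"forest", "wetland"}}
picked = greedy_reserve_selection(sites)
# Suboptimality (%) = 100 * (heuristic cost - optimal cost) / optimal cost
print(picked, "cost:", len(picked))
```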
Abstract:
Background: There is a paucity of information describing the real-time 3-dimensional echocardiography (RT3DE) dyssynchrony indexes (DIs) of a normal population. We evaluate the RT3DE DIs in a population with normal electrocardiograms and 2- and 3-dimensional echocardiographic analyses. This information is relevant for cardiac resynchronization therapy. Methods: We evaluated 131 healthy volunteers (73 were male, aged 46 +/- 14 years) who were referred for routine echocardiography; who presented normal cardiac structure on electrocardiography, 2-dimensional echocardiography, and RT3DE; and who had no history of cardiac diseases. We analyzed 3-dimensional left ventricular ejection fraction, left ventricle end-diastolic volume, left ventricle end-systolic volume, and left ventricular systolic DI% (6-, 12-, and 16-segment models). RT3DE data were analyzed by quantifying the statistical distribution (mean, median, standard deviation [SD], relative SD, coefficient of skewness, coefficient of kurtosis, Kolmogorov-Smirnov test, D'Agostino-Pearson test, percentiles, and 95% confidence interval). Results: Left ventricular ejection fraction ranged from 50% to 80% (66.1% +/- 7.1%); left ventricle end-diastolic volume ranged from 39.8 to 145 mL (79.1 +/- 24.9 mL); left ventricle end-systolic volume ranged from 12.9 to 66 mL (27 +/- 12.1 mL); 6-segment DI% ranged from 0.20% to 3.80% (1.21% +/- 0.66%), median: 1.06, relative SD: 0.5482, coefficient of skewness: 1.2620 (P < .0001), coefficient of kurtosis: 1.9956 (P = .0039); percentile 2.5%: 0.2900, percentile 97.5%: 2.8300; 12-segment DI% ranged from 0.22% to 4.01% (1.29% +/- 0.71%), median: 1.14, relative SD: 0.95, coefficient of skewness: 1.1089 (P < .0001), coefficient of kurtosis: 1.6372 (P = .0100), percentile 2.5%: 0.2850, percentile 97.5%: 3.0700; and 16-segment DI% ranged from 0.29% to 4.88% (1.59% +/- 0.99%), median: 1.39, relative SD: 0.56, coefficient of skewness: 1.0792 (P < .0001), coefficient of kurtosis: 0.9248 (P = .07), percentile 2.5%: 0.3750, percentile 97.5%: 3.750. Conclusion: This study allows for the quantification of RT3DE DIs in normal subjects, providing a comparison for patients with heart failure who may be candidates for cardiac resynchronization therapy. (J Am Soc Echocardiogr 2008; 21: 1229-1235)
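For orientation, the systolic dyssynchrony index reported by RT3DE software is commonly computed as below; this is a general definition assumed for illustration and may not match the exact convention of this study:

```latex
% Systolic dyssynchrony index over N segments (N = 6, 12 or 16):
% T_i : time to minimum regional volume of segment i,
% RR  : duration of the cardiac cycle
\mathrm{DI}_{N}\,(\%) = \frac{\mathrm{SD}\!\left(T_{1}, \ldots, T_{N}\right)}{RR} \times 100
```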
Abstract:
Objectives: Pneumothorax is a frequent complication during mechanical ventilation. Electrical impedance tomography (EIT) is a noninvasive tool that allows real-time imaging of regional ventilation. The purpose of this study was to 1) identify characteristic changes in the EIT signals associated with pneumothoraces; 2) develop and fine-tune an algorithm for their automatic detection; and 3) prospectively evaluate this algorithm for its sensitivity and specificity in detecting pneumothoraces in real time. Design: Prospective controlled laboratory animal investigation. Setting: Experimental Pulmonology Laboratory of the University of Sao Paulo. Subjects: Thirty-nine anesthetized mechanically ventilated supine pigs (31.0 +/- 3.2 kg, mean +/- SD). Interventions: In a first group of 18 animals monitored by EIT, we either injected progressive amounts of air (from 20 to 500 mL) through chest tubes or applied large positive end-expiratory pressure (PEEP) increments to simulate extreme lung overdistension. This first data set was used to calibrate an EIT-based pneumothorax detection algorithm. Subsequently, we evaluated the real-time performance of the detection algorithm in 21 additional animals (with normal or preinjured lungs), subjected to multiple ventilatory interventions or traumatic punctures of the lung. Measurements and Main Results: Primary EIT relative images were acquired online (50 images/sec) and processed according to a few imaging-analysis routines running automatically and in parallel. Pneumothoraces as small as 20 mL could be detected with a sensitivity of 100% and a specificity of 95%, and could be easily distinguished from parenchymal overdistension induced by PEEP or recruiting maneuvers. Their location was correctly identified in all cases, with a total delay of only three respiratory cycles. Conclusions: We created an EIT-based algorithm capable of detecting early signs of pneumothoraces in high-risk situations, which also identifies their location. It requires that the pneumothorax occurs or enlarges at least minimally during the monitoring period. Such detection was operator-free and in quasi real-time, opening opportunities for improving patient safety during mechanical ventilation.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the aim of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single ""representative"" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
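A simplified contrast between ROI averaging and PCA eigen-time-series extraction before a standard pairwise Granger test is sketched below with synthetic data; the paper's cluster Granger analysis generalizes this idea to multiple components per ROI via partial canonical correlation, which is not reproduced here:

```python
# Compare two ways of summarizing an ROI (mean vs. first principal component)
# before a conventional pairwise Granger causality test.
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
roi_a = rng.normal(size=(200, 30))                               # 200 time points x 30 voxels
roi_b = np.roll(roi_a, 2, axis=0) + rng.normal(scale=0.5, size=(200, 30))  # lagged copy of A

avg_a, avg_b = roi_a.mean(axis=1), roi_b.mean(axis=1)            # signal averaging
eig_a = PCA(n_components=1).fit_transform(roi_a).ravel()         # principal eigen-time series
eig_b = PCA(n_components=1).fit_transform(roi_b).ravel()

# statsmodels convention: tests whether the second column Granger-causes the first.
res_avg = grangercausalitytests(np.column_stack([avg_b, avg_a]), maxlag=3)
res_pca = grangercausalitytests(np.column_stack([eig_b, eig_a]), maxlag=3)
```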
Abstract:
Functional magnetic resonance imaging (fMRI) is currently one of the most widely used methods for studying human brain function in vivo. Although many different approaches to fMRI analysis are available, the most widely used methods employ so-called "mass-univariate" modeling of responses in a voxel-by-voxel fashion to construct activation maps. However, it is well known that many brain processes involve networks of interacting regions, and for this reason multivariate analyses might seem to be attractive alternatives to univariate approaches. The current paper focuses on one multivariate application of statistical learning theory: the statistical discrimination maps (SDM) based on support vector machines, and seeks to establish some possible interpretations when the results differ from univariate approaches. In fact, when there are changes not only in the activation level of two conditions but also in functional connectivity, SDM seems more informative. We addressed this question using both simulations and applications to real data. We have shown that the combined use of univariate approaches and SDM yields significant new insights into brain activations not available using univariate methods alone. In an application to visual working memory fMRI data, we demonstrated that interactions among brain regions play a role in SDM's power to detect discriminative voxels. (C) 2008 Elsevier B.V. All rights reserved.
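As a generic illustration in the spirit of SDM (not the paper's exact pipeline or data), a linear support vector machine trained on voxel patterns yields one weight per voxel, and those weights, mapped back to brain space, form the discrimination map:

```python
# Illustrative discrimination map from a linear SVM on synthetic voxel data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 500
X = rng.normal(size=(n_trials, n_voxels))        # hypothetical voxel patterns per trial
y = np.repeat([0, 1], n_trials // 2)             # two experimental conditions
X[y == 1, :50] += 0.4                            # condition effect confined to 50 voxels

clf = SVC(kernel="linear").fit(X, y)
# Each voxel's weight indicates its contribution to discriminating the two
# conditions; reshaped to brain space this constitutes the discrimination map.
discrimination_map = clf.coef_.ravel()
print(discrimination_map[:5])
```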
Abstract:
Cerebral toxoplasmosis is the most common cerebral mass lesion in AIDS patients in Brazil, and results in high mortality and morbidity, despite free access to HAART (highly active antiretroviral treatment). Molecular diagnosis based on conventional PCR (cnPCR) or real-time quantitative PCR (qrtPCR) has been indispensable for definitive diagnosis. We report here the evaluation of qrtPCR with blood and cerebrospinal fluid (CSF) samples from AIDS patients in Brazil. This prospective study was conducted for 2 years, analysing DNA samples extracted from 149 AIDS patients (98 blood and 51 CSF samples) with confirmed clinical and radiological diagnosis. The laboratory diagnosis included cnPCR (with the B22/B23 primer set) and indirect immunofluorescence (IF). For qrtPCR, two primer sets were simultaneously designed based on described genes and using a 6-carboxyfluorescein dye-labelled TaqMan MGB (minor groove binder) probe. One was B1Tg, which amplified a sequence from the B1 gene; the other was RETg, which amplified a PCR product of the 529 bp sequence. The overall cnPCR and qrtPCR results were as follows: positive results were observed in 33.6% (50) of the patients. The sensitivities were 98% for cnPCR (B22/B23), and 86% and 98% for qrtPCR (B1Tg and RETg, respectively). Negative reactions were observed in 66.4% of the patients. The specificities were 97% for cnPCR and qrtPCR (B1Tg), and 88.8% for RETg. These data show that RETg PCR is highly sensitive, as it amplifies a repeat region with many copies; however, its specificity is lower than that of the other markers. In contrast, B1Tg PCR had good specificity but lower sensitivity. Among the patients, 20 had blood and CSF collected simultaneously; thus, their results permitted us to analyse and compare molecular, serological and clinical diagnoses for a better understanding of the different scenarios of laboratory and clinical diagnosis. For nine patients with a confirmed cerebral toxoplasmosis diagnosis, four scenarios were observed: (i) and (ii) negative molecular diagnosis for CSF and positive for blood, with variable IF titres for the sera and CSF (negative or positive); (iii) positive molecular diagnosis with CSF and negative with blood; and (iv) positive molecular diagnosis in both samples. In the latter two situations, the IF titres in sera and CSF are normally variable. Other opportunistic infections were found in 11 patients; despite the IF titres in sera and CSF being variable, all of them had a negative molecular diagnosis for both samples. qrtPCR allows for rapid identification of Toxoplasma gondii DNA in patient samples; in a minority of cases, discrepancies occur with cnPCR.
Abstract:
Quartz Crystal Microbalance (QCM) was used to monitor the mass changes on a quartz crystal surface containing immobilized lectins that interacted with carbohydrates. The strategy for lectin immobilization was developed on the basis of a multilayer system composed of Au-cystamine-glutaraldehyde-lectin. Each step of the immobilization procedure was confirmed by FTIR analysis. The system was used to study the interactions of Concanavalin A (ConA) with maltose and Jacalin with Fetuin. The real-time binding of different concentrations of carbohydrate to the immobilized lectin was monitored by means of QCM measurements and the data obtained allowed for the construction of Langmuir isotherm curves. The association constants determined for the specific interactions analyzed here were (6.4 +/- 0.2) x 10(4) M-1 for Jacalin-Fetuin and (4.5 +/- 0.1) x 10(2) M-1 for ConA-maltose. These results indicate that the QCM constitutes a suitable method for the analysis of lectin-carbohydrate interactions, even when assaying low molecular mass ligands such as disaccharides. Published by Elsevier B.V.
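For context, a minimal sketch of the Langmuir isotherm relation typically used to extract an association constant from such binding data; the generic form is shown here and the paper may normalize the response differently:

```latex
% Langmuir adsorption isotherm for equilibrium binding:
% \Delta m        : mass change measured at carbohydrate concentration C
% \Delta m_{\max} : mass change at surface saturation
% K_a             : association constant
\Delta m = \Delta m_{\max}\,\frac{K_a C}{1 + K_a C}
% Fitting \Delta m versus C (or a linearized form) yields K_a.
```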
Abstract:
Historically, the cure rate model has been used for modeling time-to-event data in which a significant proportion of patients are assumed to be cured of illnesses, including breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, it is assumed that a certain proportion of the patients are cured, in the sense that they do not present the event of interest during a long period of time and can be considered immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum-likelihood-estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by vertical transmission illustrates the methodology.
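For reference, the Berkson and Gage mixture formulation mentioned above can be written as follows; this is the standard textbook form, with notation chosen here for illustration:

```latex
% Berkson-Gage mixture cure model:
% \pi     : cured (immune) fraction
% S_0(t)  : survival function of the susceptible (non-cured) patients
S_{\mathrm{pop}}(t) = \pi + (1 - \pi)\,S_{0}(t)
% As t \to \infty, S_{\mathrm{pop}}(t) \to \pi, the long-term survivor proportion.
```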