960 results for Real Electricity Markets Data


Relevance: 30.00%

Publisher:

Abstract:

Background: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. The choice of control genes is therefore of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones.

Results: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not previously been used as control genes are identified and validated by RT-QPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/

Conclusion: We proposed a new method for identifying candidate control genes for RT-QPCR, which was able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also showed empirically that translating the results from the microarray to the PCR platform was achievable.
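A minimal version of such a screen, ranking genes by expression stability across microarray samples, might look like the sketch below. The stability score here is a simple coefficient of variation and the gene names and data are illustrative; the paper's actual scoring criteria are richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: rows = genes, columns = microarray samples.
# GAPDH/ACTB are classic housekeeping genes; the CANDIDATEs are hypothetical.
genes = ["GAPDH", "ACTB", "CANDIDATE1", "CANDIDATE2"]
expr = np.vstack([
    rng.normal(100, 2, 20),   # stable at a high expression level
    rng.normal(80, 4, 20),    # stable
    rng.normal(50, 1, 20),    # very stable relative to its level
    rng.normal(60, 30, 20),   # highly variable: a poor control gene
])

# Score each gene by its coefficient of variation (lower = more stable)
cv = expr.std(axis=1) / expr.mean(axis=1)
ranking = [genes[i] for i in np.argsort(cv)]

print(ranking[-1])  # → 'CANDIDATE2' (the variable gene ranks last)
```

Applied to tens of thousands of microarray probes, the same argsort produces the kind of comprehensive candidate list the abstract describes.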


The project will be developed as a web application following the J2EE standard, within what could be called MVC1, so that it can be consulted online, taking advantage of new technologies and internet access.


This is a 2006 national report to the EMCDDA, using 2005 data. It was compiled by the Reitox national focal point and covers epidemiology, policing, strategy, drug markets, drug-related infectious diseases, drug-related deaths and problem drug use in Norway. This resource was contributed by the National Documentation Centre on Drug Use.


Natural selection is typically exerted at specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inference about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls (Tyto alba), our model indicates that the missing individuals would have displayed large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool for correctly estimating the magnitude of both natural selection and additive genetic variance.
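The bias from ignoring non-random missingness can be illustrated with a toy simulation (not the paper's animal model): when individuals with low trait values are preferentially unobserved before measurement, a naive variance estimate computed from survivors shrinks. The slope and variances below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a trait: additive genetic effect plus environmental noise
n = 100_000
genetic = rng.normal(0.0, 1.0, n)          # additive genetic effects
trait = genetic + rng.normal(0.0, 1.0, n)  # phenotype, total variance = 2

# Viability selection before measurement: survival probability rises with
# the trait (a logistic missingness model, i.e. missing not at random)
p_observed = 1.0 / (1.0 + np.exp(-2.0 * trait))
observed = rng.random(n) < p_observed

# A naive analysis uses only survivors and underestimates the variance
print(trait.var())             # ~2.0 in the full population
print(trait[observed].var())   # noticeably smaller among survivors
```

A shared parameter model recovers the full-population variance precisely because it models `observed` jointly with `trait` through the shared genetic effects.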


Objective: Aspergillus species are the main pathogens causing invasive fungal infections, but the prevalence of other mould species is rising. Resistance to antifungals among these newly emerging pathogens presents a challenge for the management of infections. Conventional susceptibility testing of non-Aspergillus species is laborious and often difficult to interpret. We evaluated a new method for real-time susceptibility testing of moulds based on their growth-related heat production.

Methods: Laboratory and clinical strains of Mucor spp. (n = 4), Scedosporium spp. (n = 4) and Fusarium spp. (n = 5) were used. Conventional MICs were determined by microbroth dilution. Isothermal microcalorimetry was performed at 37 °C using Sabouraud dextrose broth (SDB) inoculated with 10^4 spores/ml (determined by microscopic enumeration). SDB without antifungals was used for evaluation of growth characteristics. Detection time was defined as heat flow exceeding 10 µW. For susceptibility testing, serial dilutions of amphotericin B, voriconazole, posaconazole and caspofungin were used. The minimal heat inhibitory concentration (MHIC) was defined as the lowest antifungal concentration inhibiting 50% of the heat produced by the growth control at 48 h (at 24 h for Mucor spp.). Susceptibility tests were performed in duplicate.

Results: The tested mould genera had distinctive heat flow profiles, with a median detection time (range) of 3.4 h (1.9-4.1 h) for Mucor spp., 11.0 h (7.1-13.7 h) for Fusarium spp. and 29.3 h (27.4-33.0 h) for Scedosporium spp. The graph shows the heat flow (in duplicate) of one representative strain from each genus (dashed line marks the detection limit). Species belonging to the same genus showed similar heat production profiles. The table shows MHIC and MIC ranges for the tested moulds and antifungals.

Conclusions: Microcalorimetry allowed rapid detection of growth of slow-growing species such as Fusarium spp. and Scedosporium spp. Moreover, microcalorimetry offers a new approach to antifungal susceptibility testing of moulds, correlating with conventional MIC values. Interpretation of calorimetric susceptibility data is easy, and real-time data on the effect of different antifungals on mould growth are additionally obtained. This method may be used to investigate mechanisms of action of antifungals, new substances and drug-drug combinations.
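The MHIC read-out described above reduces to a simple computation over cumulative heat at the endpoint: the lowest concentration whose heat is at most 50% of the growth control's. The concentrations and heat values below are hypothetical; only the 50% criterion and the 48 h endpoint come from the text.

```python
# Hypothetical cumulative heat (joules) at 48 h per antifungal concentration
# (mg/L); 0.0 is the growth control without antifungal.
heat_at_48h = {
    0.0: 6.0,
    0.25: 5.8,
    0.5: 4.1,
    1.0: 2.4,   # first concentration at or below 50% of the control
    2.0: 0.3,
}

def mhic(heat_by_conc, inhibition=0.5):
    """Lowest concentration inhibiting >= `inhibition` of the control heat."""
    control = heat_by_conc[0.0]
    for conc in sorted(c for c in heat_by_conc if c > 0):
        if heat_by_conc[conc] <= (1 - inhibition) * control:
            return conc
    return None  # no tested concentration reached the threshold

print(mhic(heat_at_48h))  # → 1.0
```

Tightening `inhibition` (e.g. a 90% criterion) moves the read-out to higher concentrations, which is how stricter endpoints are sometimes defined for other drug classes.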


This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I investigate the equilibrium implications for financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that such conditions hold in real markets and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even if heterogeneity is small, as suggested by survey data. The second essay determines conditions under which equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insights over the previous literature, where models require numerical solutions. The third chapter studies why equity claims (i.e. assets paying a single future dividend) feature premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy.
Distributional risk between workers and shareholders provides a rationale for the short-term risk of equity, which leads to downward-sloping term structures of premia and risk for equity claims.


Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit-sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit-sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
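The transform-then-back-transform step can be sketched with the centred log-ratio (clr), one of Aitchison's transformations. This is a generic illustration of honouring the unit-sum constraint, not the paper's exact forecasting pipeline, and the death-density numbers are made up.

```python
import numpy as np

def clr(x):
    """Centred log-ratio: map a composition into unconstrained real space."""
    logx = np.log(x)
    return logx - logx.mean()

def clr_inv(y):
    """Back-transform to positive values honouring the unit-sum constraint."""
    expy = np.exp(y)
    return expy / expy.sum()

# A toy 'density of deaths' over four age groups (sums to 1)
deaths = np.array([0.05, 0.15, 0.30, 0.50])

z = clr(deaths)      # real-valued: ordinary multivariate statistics apply
back = clr_inv(z)    # forecasting would happen on z before this step

print(np.allclose(back, deaths))  # → True
print(z.sum())                    # clr coordinates sum to (numerically) zero
```

In the forecasting setting, the Lee-Carter style modelling is done on the unconstrained coordinates `z`, and `clr_inv` guarantees the forecasts are again valid densities.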


Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA is when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
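On a grid, the "PCA with a previous transformation" route can be sketched as: discretize each density, apply a log-ratio transform to respect its compositional nature, then run ordinary PCA. The grid, the Gaussian toy densities and the choice of the clr transform are illustrative assumptions, not the paper's data or necessarily its transform.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-4, 4, 50)

# Toy sample of densities: Gaussians with varying means and spreads
def gaussian(mu, sigma):
    d = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    return d / d.sum()  # discretized densities sum to 1 on the grid

sample = np.array([gaussian(rng.normal(0, 1), rng.uniform(0.8, 1.5))
                   for _ in range(30)])

# clr transform: accounts for the compositional nature of the densities
logs = np.log(sample)
clr = logs - logs.mean(axis=1, keepdims=True)

# Ordinary PCA on the transformed curves via SVD
centred = clr - clr.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * S                       # low-dimensional scores per density
explained = S**2 / (S**2).sum()      # variance explained per component

print(scores.shape)                  # → (30, 30)
print(explained[:2].sum() > 0.95)    # two components suffice for this toy
```

Two components dominate here because log-Gaussian densities are quadratic in the grid variable, so the clr curves span a two-dimensional subspace; real income distributions would need more components.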


Traditionally, compositional data have been identified with closed data, and the simplex has been considered the natural sample space for this kind of data. In our opinion, the emphasis on the constrained nature of compositional data has contributed to masking its real nature. More crucial than the constraining property of compositional data is its scale-invariance property. Indeed, when we consider only a few parts of a full composition we are not working with constrained data, yet our data are still compositional. We believe it is necessary to give a more precise definition of composition; this is the aim of this oral contribution.
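Scale invariance means that only the ratios between parts carry information: rescaling the raw measurements (grams versus percentages, say) changes nothing after closure, and a sub-composition of a few parts remains compositional even though it is not constrained to the original total. A minimal numeric illustration:

```python
import numpy as np

def closure(x):
    """Rescale a vector of positive parts so they sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

# The same sample reported on two different scales
grams = np.array([20.0, 30.0, 50.0])
rescaled = grams * 4.2   # any positive rescaling, e.g. a unit change

# Closure recovers identical compositions: the data are scale invariant
print(np.allclose(closure(grams), closure(rescaled)))  # → True

# A sub-composition of two parts: the raw values no longer sum to the
# original total, but the ratio between the selected parts is preserved
sub = closure(grams[:2])
print(sub[1] / sub[0] == grams[1] / grams[0])  # → True
```

This is the point of the abstract: the defining property is invariance under rescaling, not the unit-sum constraint itself.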


This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables it may be chance whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually arise in situations where any non-zero expenditure is not small. While this analysis is based around economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol) or missing because they occur only in random regions which may be missed in a sample (similar to the durables).


The statistical analysis of compositional data is commonly used in geological studies. As is well known, compositions should be treated using log-ratios of parts, which are difficult to use correctly in standard statistical packages. In this paper we describe the new features of our freeware package, named CoDaPack, which implements most of the basic statistical methods suitable for compositional data. An example using real data is presented to illustrate the use of the package.


The automatic interpretation of conventional traffic signs is very complex and time consuming. This paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; the proposal is to incorporate into the existing signs another type of traffic sign whose information can be more easily interpreted by a processor. The type of information to be added is profuse, and therefore the most important objective is the robustness of the system. The basic idea of this new philosophy is that the co-pilot system for automatic warning and driving assistance can more easily interpret the information contained in the new sign, while the human driver only has to interpret the "classic" sign. One of the codings that has been tested with good results, and which seems easy to implement, is a rectangular sign with 4 vertical bars of different colours. The size of these signs is equivalent to that of conventional signs (approximately 0.4 m²). The colour information in the sign can be easily interpreted by the proposed processor, and the interpretation is much easier and quicker than for the information shown by the pictographs of classic signs.
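A decoder for such a sign could be as simple as classifying the dominant colour of each vertical quarter of the rectified sign region. The 4-bar rectangular layout comes from the text; the colour palette, array shapes and function below are illustrative assumptions.

```python
import numpy as np

# Hypothetical palette: each bar carries one of four reference colours (RGB)
PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0),
           "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def decode_sign(patch):
    """Classify the dominant colour of each of the 4 vertical bars.

    `patch` is an (H, W, 3) RGB array of the rectified sign region.
    """
    h, w, _ = patch.shape
    code = []
    for i in range(4):
        bar = patch[:, i * w // 4:(i + 1) * w // 4]
        mean = bar.reshape(-1, 3).mean(axis=0)
        # Nearest reference colour by Euclidean distance in RGB space
        name = min(PALETTE, key=lambda k: np.linalg.norm(mean - PALETTE[k]))
        code.append(name)
    return code

# Synthetic test sign: four solid colour bars, 40x80 pixels
bars = [PALETTE["red"], PALETTE["blue"], PALETTE["yellow"], PALETTE["green"]]
sign = np.concatenate([np.full((40, 20, 3), c, dtype=float) for c in bars],
                      axis=1)
print(decode_sign(sign))  # → ['red', 'blue', 'yellow', 'green']
```

Averaging over whole bars and taking the nearest palette colour is what makes this read-out robust compared with interpreting a pictograph.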


This article presents recent WMR (wheeled mobile robot) navigation experiments using local perception knowledge provided by monocular and odometer systems. A local narrow perception horizon is used to plan safe trajectories towards the objective. Monocular data are therefore proposed as a way to obtain real-time local information by building two-dimensional occupancy grids through time integration of the frames. Path planning is accomplished using attraction potential fields, while trajectory tracking is performed using model predictive control techniques. The results address indoor situations using the available lab platform, consisting of a differentially driven mobile robot.
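Planning with attraction potential fields can be sketched as gradient descent on a goal-attraction term plus a repulsion term for nearby occupied cells of the grid. This is a generic textbook sketch under assumed gains and geometry, not the authors' implementation.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """One gradient-descent step on an attractive + repulsive potential."""
    # Attractive force pulls the robot toward the goal
    force = k_att * (goal - pos)
    # Repulsive force pushes away from obstacles within influence radius d0
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + 0.1 * force  # small fixed step size

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.3])]  # e.g. an occupied grid cell

for _ in range(200):
    pos = potential_step(pos, goal, obstacles)

print(np.linalg.norm(pos - goal) < 0.1)  # robot settles near the goal
```

In the article's setting, the `obstacles` list would come from the occupancy grid built from the monocular frames, and the resulting trajectory would then be tracked by the predictive controller.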


Geographical Information Systems (GIS) are tools that have recently been applied to better understand spatial disease distributions. Using meteorological, social, sanitation and mollusc distribution data together with remote sensing variables, this study aimed to further develop GIS technology by creating a model for the spatial distribution of schistosomiasis and applying this model to an area with rural tourism in the Brazilian state of Minas Gerais (MG). The Estrada Real, covering about 1,400 km, is the largest and most important Brazilian tourism project, involving 163 cities in MG with different schistosomiasis prevalence rates. The model with three variables showed an R² = 0.34, with a standard deviation of the estimated risk adequate for public health needs. The main variables selected for modelling were summer vegetation, summer minimum temperature and winter minimum temperature. The results confirmed the importance of remote sensing data and the valuable contribution of GIS in identifying priority areas for intervention in tourism regions endemic for schistosomiasis.


This study examined the validity and reliability of a sequential "Run-Bike-Run" test (RBR) in age-group triathletes. Eight Olympic-distance (OD) specialists (age 30.0 ± 2.0 years, mass 75.6 ± 1.6 kg, run VO2max 63.8 ± 1.9 ml·kg⁻¹·min⁻¹, cycle VO2peak 56.7 ± 5.1 ml·kg⁻¹·min⁻¹) performed four trials over 10 days. Trial 1 (TRVO2max) was an incremental treadmill running test. Trials 2 and 3 (RBR1 and RBR2) involved: 1) a 7-min run at 15 km·h⁻¹ (R1) plus a 1-min transition to 2) cycling to fatigue (2 W·kg⁻¹ body mass, then +30 W every 3 min); 3) 10 min of cycling at 3 W·kg⁻¹ (Bsubmax); another 1-min transition; and 4) a second 7-min run at 15 km·h⁻¹ (R2). Trial 4 (TT) was a 30-min cycle, 20-min run time trial. No significant differences in absolute oxygen uptake (VO2), heart rate (HR) or blood lactate concentration ([BLA]) were evident between RBR1 and RBR2. For all measured physiological variables, the limits of agreement were similar, and the mean differences were physiologically unimportant, between trials. Low levels of test-retest error (i.e. ICC > 0.8, CV < 10%) were observed for most (logged) measurements. However, [BLA] post R1 (ICC 0.87, CV 25.1%), [BLA] post Bsubmax (ICC 0.99, CV 16.3%) and [BLA] post R2 (ICC 0.51, CV 22.9%) were least reliable. These error ranges may help coaches detect real changes in training status over time. Moreover, RBR test variables can be used to predict discipline-specific and overall TT performance. Cycle VO2peak, cycle peak power output, and the change between R1 and R2 (deltaR1R2) in [BLA] were most highly related to overall TT distance (r = 0.89, p < 0.01; r = 0.94, p < 0.02; r = 0.86, p < 0.05, respectively). The percentage of TRVO2max at 15 km·h⁻¹, and deltaR1R2 HR, were also related to run TT distance (r = -0.83 and 0.86, both p < 0.05).
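The test-retest statistics used above can be sketched for one variable across the two identical trials: Bland-Altman limits of agreement on the paired differences, and a typical-error coefficient of variation against a 10% threshold. The paired measurements below are synthetic, not the study's data.

```python
import numpy as np

# Hypothetical paired measurements from two identical trials (RBR1, RBR2),
# e.g. VO2 in ml·kg⁻¹·min⁻¹ for the eight athletes
rbr1 = np.array([63.1, 58.4, 61.0, 65.2, 59.8, 62.3, 60.5, 64.0])
rbr2 = np.array([62.7, 59.1, 60.2, 65.8, 60.4, 61.9, 61.1, 63.3])

diff = rbr2 - rbr1
mean_diff = diff.mean()

# Bland-Altman 95% limits of agreement around the mean difference
loa = (mean_diff - 1.96 * diff.std(ddof=1),
       mean_diff + 1.96 * diff.std(ddof=1))

# Typical error (SD of differences / sqrt(2)) expressed as a CV (%)
typical_error = diff.std(ddof=1) / np.sqrt(2)
cv_percent = 100 * typical_error / np.concatenate([rbr1, rbr2]).mean()

print(loa)
print(cv_percent < 10)  # low CV → acceptable test-retest reliability
```

A small mean difference with narrow limits of agreement and a CV under 10% corresponds to the "physiologically unimportant" differences reported between RBR1 and RBR2.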