105 results for Module average case analysis
Abstract:
The influence of a possible nonzero chemical potential mu on the nature of dark energy is investigated by assuming that the dark energy is a relativistic perfect simple fluid obeying the equation of state p = omega rho (omega < 0, constant). The entropy condition, S >= 0, implies that the possible values of omega are heavily dependent on the magnitude, as well as on the sign, of the chemical potential. For mu > 0, the omega parameter must be greater than -1 (vacuum is forbidden), while for mu < 0 not only the vacuum but even a phantomlike behavior (omega < -1) is allowed. In any case, the ratio between the chemical potential and temperature remains constant, that is, mu/T = mu(0)/T(0). Assuming that the dark energy constituents have either a bosonic or fermionic nature, the general form of the spectrum is also proposed. For bosons mu is always negative and the extended Wien's law allows only a dark component with omega < -1/2, which includes vacuum and the phantomlike cases. The same happens in the fermionic branch for mu < 0. However, fermionic particles with mu > 0 are permitted only if -1
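One way to see why mu/T stays constant, sketched here from the standard scalings of an omega-fluid (my own summary of textbook thermodynamics, not text taken from the abstract):

```latex
% For p = \omega\rho with constant \omega, particle-number and energy
% conservation give the standard scalings with the scale factor a:
n \propto a^{-3}, \qquad \rho \propto a^{-3(1+\omega)}, \qquad
T \propto \rho^{\frac{\omega}{1+\omega}} \propto a^{-3\omega}.
% The Euler relation \rho + p = Ts + \mu n, with entropy density
% s \propto a^{-3}, then fixes the chemical potential:
\mu n = \rho + p - Ts \propto a^{-3(1+\omega)}
\;\Longrightarrow\; \mu \propto a^{-3\omega}
\;\Longrightarrow\; \frac{\mu}{T} = \frac{\mu_0}{T_0}.
```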
Abstract:
Context. Be stars undergo outbursts producing a circumstellar disk from the ejected material. The beating of non-radial pulsations has been put forward as a possible mechanism of ejection. Aims. We analyze the pulsational behavior of the early B0.5IVe star HD 49330 observed during the first CoRoT long run towards the Galactic anticenter (LRA1). This Be star is located close to the lower edge of the beta Cephei instability strip in the HR diagram and showed a 0.03 mag outburst during the CoRoT observations. It is thus an ideal case for testing the aforementioned hypothesis. Methods. We analyze the CoRoT light curve of HD 49330 using Fourier methods and non-linear least-squares fitting. Results. In this star, we find pulsation modes typical of beta Cep stars (p modes) and SPB stars (g modes) with amplitude variations along the run directly correlated with the outburst. These results provide new clues about the origin of the Be phenomenon as well as strong constraints on the seismic modelling of Be stars.
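The Fourier step of such an analysis can be sketched on a synthetic light curve. The frequencies, amplitudes, and sampling below are illustrative assumptions, not the actual HD 49330 values:

```python
import numpy as np

# Hypothetical synthetic light curve: one p-mode-like and one g-mode-like
# frequency (cycles per day) plus white noise.
rng = np.random.default_rng(0)
t = np.arange(0, 60.0, 0.01)                  # 60 d, regular sampling
flux = (0.003 * np.sin(2 * np.pi * 11.0 * t) +
        0.002 * np.sin(2 * np.pi * 1.5 * t) +
        0.0005 * rng.standard_normal(t.size))

# Discrete Fourier amplitude spectrum (regular sampling assumed; real CoRoT
# data would also need gap handling and prewhitening).
amp = np.abs(np.fft.rfft(flux)) * 2 / t.size
freq = np.fft.rfftfreq(t.size, d=0.01)        # cycles per day

# The two injected frequencies dominate the spectrum.
peaks = sorted(float(f) for f in freq[np.argsort(amp)[-2:]])
print(peaks)                                  # near [1.5, 11.0]
```

Non-linear least-squares fitting of sinusoids at the detected frequencies would then refine amplitudes and phases set by set, which is how amplitude variations along the run can be tracked.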
Abstract:
Aerosol samples were collected at a pasture site in the Amazon Basin as part of the project LBA-SMOCC-2002 (Large-Scale Biosphere-Atmosphere Experiment in Amazonia - Smoke Aerosols, Clouds, Rainfall and Climate: Aerosols from Biomass Burning Perturb Global and Regional Climate). Sampling was conducted during the late dry season, when the aerosol composition was dominated by biomass burning emissions, especially in the submicron fraction. A 13-stage Dekati low-pressure impactor (DLPI) was used to collect particles with nominal aerodynamic diameters (D(p)) ranging from 0.03 to 10 mu m. Gravimetric analyses of the DLPI substrates and filters were performed to obtain aerosol mass concentrations. The concentrations of total, apparent elemental, and organic carbon (TC, EC(a), and OC) were determined using thermal and thermal-optical analysis (TOA) methods. A light transmission method (LTM) was used to determine the concentration of equivalent black carbon (BC(e)), the absorbing fraction at 880 nm, for the size-resolved samples. During the dry period, due to the pervasive presence of fires in the region upwind of the sampling site, concentrations of fine aerosols (D(p) < 2.5 mu m: average 59.8 mu g m(-3)) were higher than coarse aerosols (D(p) > 2.5 mu m: 4.1 mu g m(-3)). Carbonaceous matter, estimated as the sum of the particulate organic matter (i.e., OC x 1.8) plus BC(e), comprised more than 90% of the total aerosol mass. Concentrations of EC(a) (estimated by thermal analysis with a correction for charring) and BC(e) (estimated by LTM) averaged 5.2 +/- 1.3 and 3.1 +/- 0.8 mu g m(-3), respectively. The determination of EC was improved by extracting water-soluble organic material from the samples, which reduced the average light absorption Angstrom exponent of particles in the size range of 0.1 to 1.0 mu m from >2.0 to approximately 1.2. The size-resolved BC(e) measured by the LTM showed a clear maximum between 0.4 and 0.6 mu m in diameter.
The concentrations of OC and BC(e) varied diurnally during the dry period, and this variation is related to diurnal changes in boundary layer thickness and in fire frequency.
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, under the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by a network of interaction of Erdos-Renyi or Barabasi-Albert type. First, we investigate situations in which one of the agents has a different probability of observation error (higher or lower). It is shown that the influence of this special agent over the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with different estimation error. When the degree of this agent is taken as a respective fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide the analytical solution of the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having higher probability of observation error promotes the loss of quality in the estimation of the agents in the other communities.
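The averaging dynamics described above can be sketched as a small simulation. All parameters (network density, noise levels, update weight) are illustrative assumptions, not those of the paper:

```python
import numpy as np

# Each agent tracks a scalar model of a quantity x_true, combining its own
# noisy observation with the average of its neighbors' current models.
rng = np.random.default_rng(1)
n_agents, steps, x_true = 50, 2000, 1.0

# Erdos-Renyi interaction network (symmetric, no self-loops).
p_edge = 0.1
adj = np.triu(rng.random((n_agents, n_agents)) < p_edge, 1)
adj = adj | adj.T

sigma = np.full(n_agents, 0.5)
sigma[0] = 5.0                      # one "special" agent with larger error

models = np.zeros(n_agents)
err_sq = np.zeros(n_agents)
for t in range(steps):
    obs = x_true + sigma * rng.standard_normal(n_agents)
    deg = adj.sum(1)
    neigh_avg = np.where(deg > 0, adj @ models / np.maximum(deg, 1), models)
    models = 0.5 * (obs + neigh_avg)
    if t > steps // 2:              # accumulate error after burn-in
        err_sq += (models - x_true) ** 2
rmse = np.sqrt(err_sq / (steps - steps // 2 - 1))
print(rmse[0] > rmse[1:].mean())    # the noisy agent is the least accurate
```

Measuring instead how `rmse[1:]` shifts as the degree of agent 0 is varied would reproduce the kind of degree-dependence analysis the abstract reports.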
Abstract:
We present rigorous upper and lower bounds for the zero-momentum gluon propagator D(0) of Yang-Mills theories in terms of the average value of the gluon field. This allows us to perform a controlled extrapolation of lattice data to infinite volume, showing that the infrared limit of the Landau-gauge gluon propagator in SU(2) gauge theory is finite and nonzero in three and in four space-time dimensions. In the two-dimensional case, we find D(0)=0, in agreement with Maas. We suggest an explanation for these results. We note that our discussion is general, although we apply our analysis only to pure gauge theory in the Landau gauge. Simulations have been performed on the IBM supercomputer at the University of Sao Paulo.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Networks (AGNs) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, which is founded on a feature selection approach where a target gene is fixed and the expression profile is observed for all other genes in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increased. The signal size was important for the accuracy of the network identification, with very good results even for small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
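A toy instance of the generate-simulate-infer-compare loop can be sketched as follows. The dynamics, the lagged-correlation criterion, and all parameters are illustrative stand-ins for the paper's feature-selection method:

```python
import numpy as np

# (1) draw a random AGN, (2) simulate noisy expression dynamics,
# (3) infer each gene's predictors, (4) score against the true network.
rng = np.random.default_rng(2)
n_genes, k_in, t_steps = 20, 2, 500

# (1) ground truth: each gene regulated by k_in randomly chosen genes.
regulators = {g: rng.choice([i for i in range(n_genes) if i != g],
                            size=k_in, replace=False)
              for g in range(n_genes)}
W = np.zeros((n_genes, n_genes))
for g, regs in regulators.items():
    W[g, regs] = rng.choice([-0.4, 0.4], size=k_in)

# (2) simulate a stable linear dynamics with unit observation noise.
x = np.zeros((t_steps, n_genes))
for t in range(1, t_steps):
    x[t] = x[t - 1] @ W.T + rng.standard_normal(n_genes)

# (3) per target gene, keep the k_in genes whose previous state correlates
# most with the target's next state (a crude feature-selection criterion).
hits = 0
for g in range(n_genes):
    corr = [abs(np.corrcoef(x[:-1, j], x[1:, g])[0, 1])
            for j in range(n_genes)]
    predicted = {int(j) for j in np.argsort(corr)[-k_in:]}
    hits += len(predicted & {int(r) for r in regulators[g]})

# (4) recovery rate: fraction of true edges found (random guessing ~ 0.1).
recovery = hits / (n_genes * k_in)
print(f"edge recovery rate: {recovery:.2f}")
```

Sweeping `k_in` upward in this sketch shows the same qualitative effect the abstract reports: recovery degrades as the average degree grows.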
Abstract:
Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationship. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, and consequently exact methods can usually not be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step to any multiple alignment or repeat inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists in the verification of several strong necessary conditions that can be checked quickly. We implemented three versions of the filter. The first is simply a straightforward extension to multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, enables appreciably more filtering with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, at a non-negligible additional time cost in many circumstances; our experiments show that it is particularly useful with large error rates. The latter version was applied as a preprocessing step for a multiple alignment tool, obtaining an overall time (filter plus alignment) on average 63 and at best 530 times smaller than before (direct alignment), with, in most cases, a better-quality alignment. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for dealing with error rates greater than 10% of the repeat length.
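The flavor of such a filter can be shown with the classical q-gram necessary condition (the actual TUIUIU conditions are stronger; this sketch, with made-up sequences and parameters, only illustrates the idea of discarding regions that cannot contain an approximate repeat):

```python
# Necessary condition: a window of length m matching another region with at
# most e edits must share at least m + 1 - q*(e + 1) of its q-grams with it.

def qgrams(s, q):
    """Set of all substrings of length q of s."""
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def candidate_windows(text, region, m=20, q=4, e=2):
    """Start positions of windows of text that pass the q-gram threshold."""
    threshold = m + 1 - q * (e + 1)          # = 9 shared q-grams here
    ref = qgrams(region, q)
    keep = []
    for start in range(len(text) - m + 1):
        shared = sum(1 for g in qgrams(text[start:start + m], q) if g in ref)
        if shared >= threshold:
            keep.append(start)
    return keep

# Two approximate copies of `region` are hidden in `text` (one exact, one
# with a single substitution); windows overlapping them pass the filter.
text = "AACGTTAGCCTAGGATCCGTACGTTAGCCTAGGATGCGTA"
region = "ACGTTAGCCTAGGATCCGTA"
print(candidate_windows(text, region))
```

Everything the filter discards is guaranteed repeat-free, so a subsequent exact (expensive) alignment or repeat-inference step only runs on the surviving windows.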
Abstract:
Background: Cancer shows a great diversity in its clinical behavior which cannot be easily predicted using the currently available clinical or pathological markers. The identification of pathways associated with lymph node metastasis (N+) and recurrent head and neck squamous cell carcinoma (HNSCC) may increase our understanding of the complex biology of this disease. Methods: Tumor samples were obtained from untreated HNSCC patients undergoing surgery. Patients were classified according to pathologic lymph node status (positive or negative) or tumor recurrence (recurrent or non-recurrent tumor) after treatment (surgery with neck dissection followed by radiotherapy). Using microarray gene expression, we screened tumor samples according to modules comprising genes in the same pathway or functional category. Results: The most frequent alterations were the repression of modules in negative lymph node (N0) and in non-recurrent tumors rather than the induction of modules in N+ or in recurrent tumors. N0 tumors showed repression of modules that contain cell survival genes, and in non-recurrent tumors cell-cell signaling and extracellular region modules were repressed. Conclusions: The repression of modules that contain cell survival genes in N0 tumors reinforces the important role that apoptosis plays in the regulation of metastasis. In addition, because the tumor samples used here were not microdissected, tumor gene expression data are represented together with the stroma, which may reveal signaling between the microenvironment and tumor cells. For instance, in non-recurrent tumors the extracellular region module was repressed, indicating that the stroma and tumor cells may have fewer interactions, which disables metastasis development.
Finally, the genes highlighted in our analysis can be implicated in more than one pathway or characteristic, suggesting that therapeutic approaches to prevent tumor progression should target more than one gene or pathway, especially apoptosis and interactions between tumor cells and the stroma.
Abstract:
In this work a downscaled multicommuted flow injection analysis setup for photometric determination is described. The setup consists of a flow system module and a LED-based photometer, with a total internal volume of about 170 mu L. The system was tested by developing an analytical procedure for the photometric determination of iodate in table salt using N,N-diethyl-p-phenylenediamine (DPD) as the chromogenic reagent. Accuracy was assessed by applying the paired t-test between results obtained using the proposed procedure and a reference method, and no significant difference at the 95% confidence level was observed. Other profitable features, such as a low reagent consumption of 7.3 mu g DPD per determination, a linear response ranging from 0.1 up to 3.0 mg L(-1) IO(3)(-), a relative standard deviation of 0.9% (n = 11) for samples containing 0.5 mg L(-1) IO(3)(-), a detection limit of 17 mu g L(-1) IO(3)(-), a sampling throughput of 117 determinations per hour, and a waste generation of 600 mu L per determination, were also achieved. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The physiological and perceptual demands, together with match notation, of a four-set tennis match were studied in two elite professional players during the preparation for the 2008 Davis Cup. The design of this case report is unique in that it is the first to describe the demands of prolonged match-play (197 min) over four sets in ecologically valid conditions. The variables measured before and after each set included blood lactate and glucose concentrations, body mass, and perception of effort. Stroke count for each rally and heart rate were recorded during each set, while salivary cortisol concentration was determined before and after the match. The rally length decreased as the match progressed. The results showed significant physiological stress, with each player losing more than 2.5% of body mass (as fluid) and having elevated salivary cortisol concentrations after the match. Heart rate and perception of effort were also increased following each set, indicating increasing stress. However, blood lactate decreased following the fourth set while blood glucose was maintained. The results also suggest that elite players may adjust work rates or tactics to cope with the increased perception of effort. This report shows that four sets of tennis are associated with increasing stress and fatigue.
Abstract:
Mixed martial arts (MMA) has become a fast-growing form of martial arts competition worldwide, requiring a high level of skill, physical conditioning, and strategy, and involving a synthesis of combat while standing or on the ground. This study quantified the effort-pause ratio (EP) and classified effort segments of stand-up or groundwork development to identify the number of actions performed per round in MMA matches. 52 MMA athletes participated in the study (M age = 24 yr., SD = 5; average experience in MMA = 5 yr., SD = 3). A one-way analysis of variance with repeated measures was conducted to compare the type of action across the rounds. A chi-squared test was applied across the percentages to compare proportions of different events. Only one significant difference (p < .05) was observed among rounds: time in groundwork of low intensity was longer in the second compared to the third round. When the interval between rounds was not considered, the EP ratio (between high-intensity effort and low-intensity effort plus pauses) was 1:2 to 1:4. This ratio lies between the ratios typical for judo, wrestling, karate, and taekwondo and reflects the combination of ground and stand-up techniques. Most of the matches ended in the third round, involving high-intensity actions, predominantly executed during groundwork combat.
Abstract:
In the first part of this paper (Part I), conditions were presented for the gas cleaning technological route for environomic optimisation of a cogeneration system based on a thermal cycle with municipal solid waste incineration. In this second part, an environomic analysis is presented of a cogeneration system comprising a combined cycle composed of a gas cycle burning natural gas, with a heat recovery steam generator with no supplementary burning, and a steam cycle burning municipal solid waste (MSW), to which, besides the pure back-pressure steam turbine, another one of pure condensation will be added. This analysis aims to select, for several scenarios, the best atmospheric pollutant emission control routes (rc) according to the criteria of minimisation of investment, operating and social damage costs. In this study, a comparison is also performed with the results obtained in the case study presented in Part I. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents case studies in power systems using Sensitivity Analysis (SA) guided by Optimal Power Flow (OPF) problems in different operation scenarios. The case studies start from a known optimal solution obtained by OPF. This optimal solution is called the base case, and from it new operating points may be evaluated by SA when perturbations occur in the system. The SA is based on Fiacco's theorem and has the advantage of not being an iterative process. In order to show the good performance of the proposed technique, tests were carried out on the IEEE 14-, 118- and 300-bus systems. (C) 2010 Elsevier Ltd. All rights reserved.
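The non-iterative idea behind this kind of sensitivity analysis can be illustrated on a toy problem (a two-variable quadratic program, not an OPF model): starting from a solved base case, the displacement of the optimum under a perturbation is obtained from one linear solve of the KKT system instead of re-running the optimisation.

```python
import numpy as np

# Base case: min (x-2)^2 + (y-1)^2  s.t.  x + y = p,  solved at p0 = 3
# with optimum (x0, y0) = (2, 1) and multiplier lam0 = 0.
x0, y0, lam0, p0 = 2.0, 1.0, 0.0, 3.0

# KKT sensitivity system: [Hessian  A^T; A  0] [dx; dlam] = [0; dp],
# where A = [1, 1] is the constraint Jacobian.
kkt = np.array([[2.0, 0.0, 1.0],
                [0.0, 2.0, 1.0],
                [1.0, 1.0, 0.0]])
dp = 0.4
dx, dy, dlam = np.linalg.solve(kkt, np.array([0.0, 0.0, dp]))

x1, y1 = x0 + dx, y0 + dy
print(x1, y1)    # for this quadratic/linear problem the update is exact
```

For a general nonlinear problem such as OPF the same linear solve gives a first-order estimate of the new operating point, which is exactly what makes the approach non-iterative.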
Abstract:
The taxonomy of the N(2)-fixing bacteria belonging to the genus Bradyrhizobium is still poorly refined, mainly due to conflicting results obtained by the analysis of phenotypic and genotypic properties. This paper presents an application of a method aiming at the identification of possible new clusters within a Brazilian collection of 119 Bradyrhizobium strains showing phenotypic characteristics of B. japonicum and B. elkanii. The stability was studied as a function of the number of restriction enzymes used in the RFLP-PCR analysis of three ribosomal regions, with three restriction enzymes per region. The method proposed here uses clustering algorithms with distances calculated by average-linkage clustering. The stability analysis is performed by introducing perturbations using sub-sampling techniques. The method showed efficacy in grouping the species B. japonicum and B. elkanii. Furthermore, two new clusters were clearly defined, indicating possible new species, as well as sub-clusters within each detected cluster. (C) 2008 Elsevier B.V. All rights reserved.
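The sub-sampling stability idea can be sketched as follows, with synthetic 2-D profiles standing in for the RFLP-PCR fingerprints (sizes, noise, and the co-membership score are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

# Two well-separated synthetic "species", 15 strains each.
rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [4.0, 4.0]])
data = np.vstack([c + 0.3 * rng.standard_normal((15, 2)) for c in centers])

def average_linkage_two(points):
    """Naive agglomerative average-linkage clustering down to 2 clusters."""
    clusters = [[i] for i in range(len(points))]
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    while len(clusters) > 2:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                avg = d[np.ix_(clusters[a], clusters[b])].mean()
                if best is None or avg < best[0]:
                    best = (avg, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    labels = np.empty(len(points), dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels

base = average_linkage_two(data)

# Stability: re-cluster random subsamples and measure how often pairs of
# strains keep the same co-membership as in the base clustering.
agreement = []
for _ in range(50):
    idx = rng.choice(len(data), size=24, replace=False)
    sub = average_linkage_two(data[idx])
    same_base = base[idx][:, None] == base[idx][None, :]
    same_sub = sub[:, None] == sub[None, :]
    agreement.append(float((same_base == same_sub).mean()))
print(round(sum(agreement) / len(agreement), 2))   # ~1.0: stable clusters
```

Clusters that survive this kind of perturbation are the candidates for genuinely new groupings; unstable ones are more likely artifacts of the particular enzyme set or sample.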
Abstract:
An analysis of the geomorphic system's response to changes in human and natural drivers in some areas within the Rio de la Plata basin is presented. The aim is to determine whether an acceleration of geomorphic processes has taken place in recent years and, if so, to what extent it is due to natural (climate) or human (land-use) drivers. Study areas of different size, socio-economic and geomorphic conditions have been selected: the Rio de la Plata estuary and three sub-basins within its watershed. Sediment cores were extracted and dated ((210)Pb) to determine sedimentation rates since the end of the 19th century. Rates were compared with time series of rainfall as well as of human drivers such as population, GDP, livestock load, crop area, energy consumption and cement consumption, all of them related to the human capacity to disturb the land surface. Data on river discharge were also gathered. The results obtained indicate that sedimentation rates during the last century have remained essentially constant in a remote Andean basin, whereas they show important increases in the other two, particularly the one located by the Sao Paulo metropolitan area. Rates in the estuary are somewhere in between. It appears that there is an intensification of denudation/sedimentation processes within the basin. Rainfall remained stable or varied very slightly during the period analysed and does not seem to explain the increases in sedimentation rates observed. Human drivers, particularly those more directly related to the capacity to disturb the land surface (GDP, energy or cement consumption), show variations that suggest human forcing is a more likely explanation for the observed change in geomorphic processes. It appears that a marked increase in denudation, of a "technological" nature, is taking place in this basin, leading to an acceleration of sediment supply. This is coherent with similar increases observed in other regions. (C) 2010 Elsevier B.V. All rights reserved.