929 results for Maximum Power Point Tracking
Abstract:
We study the effect of strong heterogeneities on the fracture of disordered materials using a fiber bundle model. The bundle is composed of two subsets of fibers, i.e. a fraction 0 ≤ α ≤ 1 of fibers is unbreakable, while the remaining 1 - α fraction is characterized by a distribution of breaking thresholds. Assuming global load sharing, we show analytically that there exists a critical fraction of the components αc which separates two qualitatively different regimes of the system: below αc the burst size distribution is a power law with the usual exponent τ = 5/2, while above αc the exponent switches to a lower value τ = 9/4 and a cutoff function emerges with a diverging characteristic size. Analyzing the macroscopic response of the system, we demonstrate that the transition is conditioned on disorder distributions whose constitutive curve has a single maximum and an inflexion point, defining a novel universality class of breakdown phenomena.
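A minimal Monte Carlo sketch of the quasi-static, global-load-sharing dynamics described above, with a fraction α of unbreakable fibers. The threshold distribution (uniform on (0, 1)) and all names are illustrative assumptions, not taken from the paper, whose transition requires specific disorder distributions:

```python
import numpy as np

def burst_sizes(N=100_000, alpha=0.3, seed=None):
    """Quasi-static avalanche (burst) sizes in a global-load-sharing fiber
    bundle in which a fraction alpha of the N fibers is unbreakable.
    Breakable thresholds are drawn uniformly on (0, 1) for illustration."""
    rng = np.random.default_rng(seed)
    M = int(round((1.0 - alpha) * N))      # number of breakable fibers
    t = np.sort(rng.random(M))             # sorted breaking thresholds
    # Total external load at which the k-th weakest breakable fiber fails:
    # all N - k + 1 still-intact fibers (breakable or not) share the load equally.
    F = t * (N - np.arange(M))
    bursts, running_max, current = [], -np.inf, 0
    for f in F:
        if f > running_max:                # a new stable load level starts a burst
            if current:
                bursts.append(current)
            running_max, current = f, 1
        else:                              # fiber fails within the ongoing avalanche
            current += 1
    if current:                            # last burst (includes the final collapse)
        bursts.append(current)
    return np.array(bursts)

if __name__ == "__main__":
    sizes = burst_sizes(seed=1)
    print(f"{sizes.size} bursts, largest = {sizes.max()}, mean size = {sizes.mean():.2f}")
```

A histogram of the returned burst sizes can then be inspected for the power-law regimes discussed in the abstract.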
Abstract:
We previously reported that nuclear grade assignment of prostate carcinomas is subject to a cognitive bias induced by the tumor architecture. Here, we asked whether this bias is mediated by the non-conscious selection of nuclei that "match the expectation" induced by the inadvertent glance at the tumor architecture. Twenty pathologists were asked to grade nuclei in high-power fields of 20 prostate carcinomas displayed on a computer screen. Unknown to the pathologists, each carcinoma was shown twice, once against the background of a low-grade, tubule-rich carcinoma and once against the background of a high-grade, solid carcinoma. Eye tracking made it possible to identify which nuclei the pathologists fixated during the 8-second projection period. For all 20 pathologists, nuclear grade assignment was significantly biased by tumor architecture. Pathologists tended to fixate on bigger, darker, and more irregular nuclei when those were projected against high-grade, solid carcinomas than against low-grade, tubule-rich carcinomas (and vice versa). However, the morphometric differences of the selected nuclei accounted for only 11% of the architecture-induced bias, suggesting that it can be explained only to a small extent by the unconscious fixation on nuclei that "match the expectation". In conclusion, the selection of "matching nuclei" represents an unconscious effort to vindicate the gravitation of nuclear grades towards the tumor architecture.
Abstract:
The analysis of multiantenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for a majority of channels, the slope equals the minimum of the number of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling but has no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of SNR|dB, the high-SNR capacity is expanded as an affine function in which the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term, or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role for SNR levels of practical interest.
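For reference, a commonly used form of the affine high-SNR expansion mentioned above, written with the customary symbols S∞ for the high-SNR slope and L∞ for the power offset (notation assumed here, not quoted from the paper):

```latex
% Affine high-SNR expansion of capacity (bit/s/Hz); the o(1) term vanishes as SNR grows.
%   S_\infty           : high-SNR slope, in bit/s/Hz per 3 dB of SNR
%   \mathcal{L}_\infty  : zero-order term, the power offset, in 3-dB units
C(\mathrm{SNR}) \;=\; S_\infty\!\left(\log_2 \mathrm{SNR} - \mathcal{L}_\infty\right) + o(1)
            \;=\; S_\infty\!\left(\frac{\mathrm{SNR}|_{\mathrm{dB}}}{3\,\mathrm{dB}} - \mathcal{L}_\infty\right) + o(1)
```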
Abstract:
This paper formulates power allocation policies that maximize the region of mutual informations achievable in multiuser downlink OFDM channels. Arbitrary partitioning of the available tones among users and arbitrary modulation formats, possibly different for every user, are considered. Two distinct policies are derived, respectively for slow fading channels tracked instantaneously by the transmitter and for fast fading channels known only statistically thereby. With instantaneous channel tracking, the solution adopts the form of a multiuser mercury/waterfilling procedure that generalizes the single-user mercury/waterfilling introduced in [1, 2]. With only statistical channel information, in contrast, the mercury/waterfilling interpretation is lost. For both policies, a number of limiting regimes are explored and illustrative examples are provided.
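As context for the mercury/waterfilling procedure, here is a minimal sketch of the classical single-user waterfilling policy to which mercury/waterfilling reduces when the inputs are Gaussian; it is not the multiuser policy of the paper, and the function and variable names are hypothetical:

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-12):
    """Classical waterfilling over parallel Gaussian channels.
    gains       : per-tone channel power gains |h_k|^2 / noise power.
    total_power : power budget to distribute over the tones.
    Returns powers p_k maximizing sum_k log2(1 + gains[k] * p_k)
    subject to sum_k p_k = total_power and p_k >= 0."""
    g = np.asarray(gains, dtype=float)
    # Bisection on the water level mu, where p_k = max(mu - 1/g_k, 0).
    lo, hi = 0.0, total_power + 1.0 / g.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / g, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)

# Example: four tones with unequal gains and a total power budget of 4.
p = waterfilling([2.0, 1.0, 0.5, 0.1], total_power=4.0)
print(p, p.sum())   # weakest tone receives no power
```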
Abstract:
We present a novel approach to N-person bargaining, based on the idea that the agreement reached in a negotiation is determined by how the direct conflict resulting from disagreement would be resolved. Our basic building block is the disagreement function, which maps each set of feasible outcomes into a disagreement point. Using this function and a weak axiom based on individual rationality we reach a unique solution: the agreement in the shadow of conflict, ASC. This agreement may be construed as the limit of a sequence of partial agreements, each of which is reached as a function of the parties' relative power. We examine the connection between ASC and asymmetric Nash solutions. We show the connection between the power of the parties embodied in the ASC solution and the bias in the SWF that would select ASC as an asymmetric Nash solution.
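For comparison, a small numerical sketch of the standard two-player asymmetric Nash bargaining solution to which the abstract relates ASC; this is not the ASC construction itself, and the surplus, disagreement points, and weights below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def asymmetric_nash(d1, d2, w1, w2, total=1.0):
    """Two-player asymmetric Nash bargaining solution on a linear frontier:
    maximize (u1 - d1)^w1 * (u2 - d2)^w2 subject to u1 + u2 = total,
    with disagreement point (d1, d2) and bargaining weights (w1, w2)."""
    def neg_log_product(u1):
        u2 = total - u1
        if u1 <= d1 or u2 <= d2:
            return np.inf                      # outside the individually rational region
        return -(w1 * np.log(u1 - d1) + w2 * np.log(u2 - d2))
    res = minimize_scalar(neg_log_product, bounds=(d1, total - d2), method="bounded")
    return res.x, total - res.x

# Equal weights recover the symmetric Nash split of the surplus;
# a larger w1 shifts the agreement toward player 1.
print(asymmetric_nash(0.1, 0.2, 0.5, 0.5))     # ~ (0.45, 0.55)
print(asymmetric_nash(0.1, 0.2, 0.7, 0.3))     # ~ (0.59, 0.41)
```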
Abstract:
In the Marketing Business Management course, specifically in Entrepreneurship within the discipline of Simulation - Marketing Games, the year's assignment was to create a company in the computer business using the online business simulator Marketplace, in order to put into practice all the theoretical knowledge acquired during the previous semesters. On this platform we were confronted with decisions across eight quarters, four per year, in order to encourage learning in a practical way within a virtual and dynamic environment. Every quarter we were faced with well-organized tasks, taking as a reference point defined strategies such as market research analysis, branding, store management after the store's creation, development of the 4Ps policy, identification of opportunities, monitoring of finances, and heavy investment. In every quarter decisions were submitted and results were then returned, such as market performance, financial performance, future investments, and the "health" of the company's marketing efficiency, which were then analyzed by our company, by the teaching staff, and also by the competition through the Balanced Scorecard, i.e., semi-annual and cumulative. For the start of activities, the first year was awarded a total of 2,000,000, corresponding to 500,000 in each of the first 4 quarters, plus 5,000,000 in the fifth quarter, for a total of 7,000,000. The capital invested was used to buy market research, open sales offices, create brands, hire a sales force, advertise the products created, and perform R&D activity, in order to make a profit and become self-sufficient enough to guarantee repayment of the principal invested to headquarters (Corporate Headquarters).
Abstract:
Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without affecting the duration of image acquisition. A small z-encoding gradient is subsequently added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition. An opposite-polarity z-encoding gradient is added to the orthogonal tag direction. The additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags are used to resolve in-plane motion, while the added z-gradients are used to resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both a phantom and an in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open up new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given the potential hazards of thunderstorms, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground (CG) lightning data and intra-cloud (IC) lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the activity of IC and CG flashes is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase it is possible to observe a few more CG flashes (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of normalized (with respect to thunderstorm total duration and maximum value of the variables considered) thunderstorm parameters. Among other findings, the study indicates that the normalized duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
Abstract:
The removal of thick layers of soil under native scrubland (Cerrado) on the right bank of the Paraná River in Selvíria (State of Mato Grosso do Sul, Brazil) for construction of the Ilha Solteira Hydroelectric Power Plant caused environmental damage, affecting the revegetation process of the stripped soil. Over the years, various kinds of land use and management systems have been tried, and the aim of this study was to assess the effects of these attempts to restore the structural quality of the soil. The experiment was conducted with five treatments and thirty replications. The following treatments were applied: stripped soil without anthropic intervention and total absence of plant cover; stripped soil treated with sewage sludge and planted to eucalyptus and grass a year earlier; stripped soil developing natural secondary vegetation (capoeira) since 1969; pastureland since 1978, replacing the native vegetation; and soil under native vegetation (Cerrado). In the 0.00-0.20 m layer, the soil was chemically characterized for each experimental treatment. A 30-point sampling grid was used to assess soil porosity and bulk density, and to assess aggregate stability in terms of mean weight diameter (MWD) and geometric mean diameter (GMD). Aggregate stability was also determined using simulated rainfall. The results show that using sewage sludge incorporated with a rotary hoe improved the chemical fertility of the soil and produced a more uniform soil pore size distribution. Leaving the land to develop secondary vegetation or turning it over to pastureland produced an intermediate level of structural soil quality, and these two treatments produced similar results. Stripped soil without anthropic intervention was of the lowest quality, with the lowest values for cation exchange capacity (CEC) and macroporosity, as well as the highest values of soil bulk density and percentage of aggregates with diameter <0.50 mm, corroborated by its lower organic matter content. However, the percentage of larger aggregates was higher in the native vegetation treatment, which boosted MWD and GMD values. Therefore, the assessment of these land use and management systems shows that even decades after their implementation to mitigate the degenerative effects resulting from the installation of the Hydroelectric Plant, more efficient approaches are still required to recover the structural quality of the soil.
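The aggregate-stability indices mentioned above have standard definitions that the abstract does not restate; written with w_i for the mass fraction retained in sieve class i and x̄_i for the mean diameter of that class, they read:

```latex
% Mean weight diameter (MWD) and geometric mean diameter (GMD) of soil aggregates,
% computed over n sieve classes with mass fractions w_i and mean class diameters \bar{x}_i.
\mathrm{MWD} = \sum_{i=1}^{n} w_i \, \bar{x}_i,
\qquad
\mathrm{GMD} = \exp\!\left(\frac{\sum_{i=1}^{n} w_i \ln \bar{x}_i}{\sum_{i=1}^{n} w_i}\right)
```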
Abstract:
Three-dimensional imaging and quantification of myocardial function are essential steps in the evaluation of cardiac disease. We propose a tagged magnetic resonance imaging methodology called zHARP that encodes and automatically tracks myocardial displacement in three dimensions. Unlike other motion encoding techniques, zHARP encodes both in-plane and through-plane motion in a single image plane without affecting the acquisition speed. Postprocessing unravels this encoding in order to directly track the 3-D displacement of every point within the image plane throughout an entire image sequence. Experimental results include a phantom validation experiment, which compares zHARP to phase contrast imaging, and an in vivo study of a normal human volunteer. Results demonstrate that the simultaneous extraction of in-plane and through-plane displacements from tagged images is feasible.
Abstract:
Climate refers to the long-term course or condition of weather, usually over a time scale of decades and longer. It has been documented that our global climate is changing (IPCC 2007, Copenhagen Diagnosis 2009), and Iowa is no exception. In Iowa, statistically significant changes in our precipitation, streamflow, nighttime minimum temperatures, winter average temperatures, and dew-point humidity readings have occurred during the past few decades. Iowans are already living with warmer winters, longer growing seasons, warmer nights, higher dew-point temperatures, increased humidity, greater annual streamflows, and more frequent severe precipitation events (Fig. 1-1) than were prevalent during the past 50 years. Some of the impacts of these changes could be construed as positive, and some are negative, particularly the tendency for greater precipitation events and flooding. In the near term, we may expect these trends to continue as long as climate change is prolonged and exacerbated by increasing greenhouse gas emissions globally from the use of fossil fuels and fertilizers, the clearing of land, and agricultural and industrial emissions. This report documents the impacts of changing climate on Iowa during the past 50 years. It seeks to answer two questions: “What are the impacts of climate change in Iowa that have already been observed?” and “What are the effects on public health, our flora and fauna, agriculture, and the general economy of Iowa?”
Abstract:
A new paint testing device was built to determine the resistance of paints to darkening due to road grime being tracked onto them. The device consists of a tire rotating on a sample drum. Soil was applied to the tire and then tracked onto paint samples which were attached to the drum. A colorimeter was used to measure the lightness of the paints after being tracked. Lightness is measured from 0 (absolute black) to 100 (absolute white). Four experiments were run to determine the optimum time length to track a sample, the reproducibility, the effects of different soils, and the maximum acceptable level for darkening of a paint. The following conclusions were reached: 1) the optimum tracking time was 10 minutes; 2) the reproducibility had a standard deviation of 1.5 lightness units; 3) different soils did not have a large effect on the amount of darkening on the paints; 4) a maximum acceptable darkness could not be established based on the limited amount of data; and 5) a correlation exists between the paints which were darkening in the field and the paints which were turning the darkest on the tracking wheel.
Abstract:
When decommissioning a nuclear facility it is important to be able to estimate activity levels of potentially radioactive samples and compare with clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation for experimental data obtained using a simple point source permits the computation of absolute calibration factors for more complex geometries with an accuracy of a bit more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is supposed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, like a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct within about 20%, if sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that activity could be largely underestimated in the event of a centrally-located hotspot and overestimated for a peripherally-located hotspot if the sample is assumed to be homogeneously contaminated. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries.
Abstract:
Positive selection is widely estimated from protein-coding sequence alignments by the nonsynonymous-to-synonymous ratio omega. Increasingly elaborate codon models are used in a likelihood framework for this estimation. Although there is widespread concern about the robustness of the estimation of the omega ratio, more efforts are needed to estimate this robustness, especially in the context of complex models. Here, we focused on the branch-site codon model. We investigated its robustness on a large set of simulated data. First, we investigated the impact of sequence divergence. We found evidence of underestimation of the synonymous substitution rate (dS) for values as small as 0.5, with a slight increase in false positives for the branch-site test. When dS increases further, the underestimation of dS is worse, but false positives decrease. Interestingly, the detection of true positives follows a similar distribution, with a maximum for intermediate values of dS. Thus, high dS is more of a concern for a loss of power (false negatives) than for false positives of the test. Second, we investigated the impact of GC content. We showed that there is no significant difference in false positives between high-GC (up to ~80%) and low-GC (~30%) genes. Moreover, neither shifts of GC content on a specific branch nor major shifts in GC along the gene sequence generate many false positives. Our results confirm that the branch-site test is very conservative.
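A minimal sketch of the likelihood-ratio step behind the branch-site test discussed above, assuming the alternative and null codon models have already been fitted with an external tool and using the conventional one-degree-of-freedom chi-square reference; the function name and the example log-likelihoods are hypothetical:

```python
from scipy.stats import chi2

def branch_site_lrt(lnL_alt, lnL_null, df=1):
    """Likelihood-ratio test comparing the branch-site alternative model
    (foreground omega free) against its null (foreground omega fixed at 1).
    lnL_alt, lnL_null : maximized log-likelihoods from the codon-model fits.
    Returns the test statistic 2*deltaLnL and its chi-square p-value."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(max(stat, 0.0), df)

# Hypothetical log-likelihoods for illustration only:
stat, p = branch_site_lrt(lnL_alt=-2345.6, lnL_null=-2348.9)
print(f"2*deltaLnL = {stat:.2f}, p = {p:.4f}")
```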
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample size (n < 30) and this should encourage highly conservative use of predictions based on small sample size and restrict their use to exploratory modelling.
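A small illustration of the AUC evaluation described above, computed on hypothetical presence-absence data and model suitability scores rather than on any of the paper's 12 algorithms:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation data: 1 = species recorded present, 0 = absent,
# plus a model's continuous habitat-suitability score for each evaluation site.
rng = np.random.default_rng(0)
presence = rng.integers(0, 2, size=200)
suitability = np.clip(presence * 0.3 + rng.normal(0.5, 0.25, size=200), 0, 1)

# Area under the ROC curve: the probability that a randomly chosen presence
# site receives a higher suitability score than a randomly chosen absence site.
auc = roc_auc_score(presence, suitability)
print(f"AUC = {auc:.3f}")   # 0.5 = no discrimination, 1.0 = perfect discrimination
```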