866 results for Power Distribution Poles


Relevance:

30.00%

Publisher:

Abstract:

This project is funded by the European Research Council under FP7 (grant no. 259328, 2010) and by EPSRC (grant no. EP/K006428/1, 2013).


Relevance:

30.00%

Publisher:

Abstract:

An author who puts their name on an academic publication will be recognized for their contribution to the research and must also assume responsibility for it. Various arrangements can be used to name authors and to indicate the extent of their contribution to the research. For example, authors may be listed in decreasing order of the importance of their contributions, which assigns greater credit and responsibility to the first authors (as in the health sciences), or individuals may be listed alphabetically, giving equal recognition to all (as is seen in some fields of the social sciences). Practices specific to certain disciplines or research fields have also emerged (such as the notion of the corresponding author, or of the research supervisor being named last in the author list). In the health sciences, when the research is multidisciplinary in nature, different norms and practices exist concerning the distribution and order of authorship, which can give rise to disagreements and even conflicts within research teams. Although researchers agree that authorship should be distributed 'fairly', there is no consensus on what counts as 'fair' in the context of multidisciplinary research teams. In this thesis, we propose an ethical framework for the fair distribution of authorship in multidisciplinary health sciences teams. We present a critical review of the literature on the distribution of authorship in research. We analyze the issues that can hinder or complicate a fair distribution of authorship, such as power imbalances, conflicts of interest, and the diversity of disciplinary cultures. We find that international norms are too vague and therefore do not help researchers manage the complexity of the issues surrounding the distribution of authorship. This limitation becomes particularly important in global health, where researchers from developed countries collaborate with researchers from developing countries. To create a flexible conceptual framework able to accommodate the diversity of types of multidisciplinary research, we propose an approach influenced by T.M. Scanlon's contractualism. This approach uses mutual respect and the normative force of reason as its foundation to justify the application of ethical principles. We thus developed four principles for the fair distribution of authorship in research: merit, fair recognition, transparency, and collegiality. Finally, we propose a process incorporating a contribution-based taxonomy to delineate each person's role in the research project. Contributions can then be better compared and evaluated to determine the order of authorship in multidisciplinary health sciences research teams.

Relevance:

30.00%

Publisher:

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, assessing the feasibility of extracting functional connectivity networks with different methods as well as the dynamic variability within some of those methods. Furthermore, this work examines whether valid networks can be produced from a sparsely sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with its own assumptions, strengths, and limitations for exploring how the resting-state components interact in space and time.

Correlation is perhaps the simplest technique. Using this technique, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
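
As a concrete illustration of the seed-based approach, a minimal Python sketch is given below; it assumes the preprocessed BOLD data are already arranged as a (time x voxels) array and that the seed region's voxel indices are known, so the function name and data layout are illustrative rather than taken from the thesis.

    import numpy as np

    def seed_correlation_map(data, seed_voxels):
        """Correlate every voxel time course with the mean seed time course.

        data        : 2-D array, shape (n_timepoints, n_voxels), preprocessed BOLD.
        seed_voxels : indices of the voxels that form the seed region.
        Returns a 1-D array of Pearson correlation coefficients, one per voxel.
        """
        seed_ts = data[:, seed_voxels].mean(axis=1)            # seed time profile
        seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()
        z = (data - data.mean(axis=0)) / data.std(axis=0)      # z-score each voxel
        return z.T @ seed_ts / len(seed_ts)                    # Pearson r per voxel

    # Example with synthetic data: 200 time points, 5000 voxels, seed = first 10 voxels.
    rng = np.random.default_rng(0)
    bold = rng.standard_normal((200, 5000))
    corr_map = seed_correlation_map(bold, np.arange(10))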

The second technique, independent component analysis, is supported by established software packages. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, which amounts to assuming that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
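
Purely as an illustration, spatial ICA can be sketched with scikit-learn's FastICA applied to the transposed (voxels x time) matrix, so that the extracted sources are spatial maps that are independent over space; the component count and the synthetic data are assumptions, and the thesis relies on established fMRI ICA software rather than this minimal example.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    bold = rng.standard_normal((200, 5000))          # (time, voxels), stand-in for real data

    # Spatial ICA: samples are voxels, so the extracted sources are spatial maps
    # that are statistically independent over space, each with a mixing time course.
    ica = FastICA(n_components=20, random_state=0, max_iter=1000)
    spatial_maps = ica.fit_transform(bold.T)          # (voxels, components)
    time_courses = ica.mixing_                        # (time, components)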

Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is greatest. This method uses the same basic matrix math as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.

To begin to investigate the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results for the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients vary over time for different window sizes. This technique makes it apparent that the correlation with the seed region is not static throughout the scan.
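
A minimal sketch of the sliding-window idea, assuming a seed time course, one target time course, and a window length chosen by the analyst:

    import numpy as np

    def sliding_window_corr(seed_ts, target_ts, window):
        """Pearson correlation between two time courses within each sliding window."""
        n = len(seed_ts) - window + 1
        return np.array([np.corrcoef(seed_ts[i:i + window],
                                     target_ts[i:i + window])[0, 1]
                         for i in range(n)])

    # Example: 240 time points, 30-point window (roughly 60 s at a TR of 2 s).
    rng = np.random.default_rng(1)
    seed, target = rng.standard_normal((2, 240))
    dynamic_r = sliding_window_corr(seed, target, window=30)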

The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
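
One common formulation of this idea selects the frames where the seed's z-scored signal crosses a threshold and averages the corresponding whole-brain patterns; the sketch below follows that formulation, and the threshold and selection rule are assumptions rather than the exact procedure used in the thesis.

    import numpy as np

    def point_process_map(data, seed_voxels, z_thresh=1.0):
        """Average the BOLD pattern over the frames where the seed exceeds a threshold.

        data : (n_timepoints, n_voxels) preprocessed BOLD; seed_voxels : seed indices.
        """
        seed_ts = data[:, seed_voxels].mean(axis=1)
        seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
        events = seed_z > z_thresh                   # brief high-amplitude excursions
        return data[events].mean(axis=0)             # co-activation pattern, (n_voxels,)

    rng = np.random.default_rng(2)
    bold = rng.standard_normal((200, 5000))
    cap = point_process_map(bold, np.arange(10), z_thresh=1.0)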

This work compares four methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation are examined, changing patterns of correlation are evident. The point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.

Relevance:

30.00%

Publisher:

Abstract:

Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fits the impacts of the most damaging 10% of US hurricane seasons well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model that represents a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes with sizes and intensities generated through a Monte-Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to Maximum Potential Intensity, landfall rates and building structure vulnerability. The Z-CAT model results demonstrate that a statistically significant difference in damage is found only when the parameter changes produce a doubling of damage.
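
The following toy Monte-Carlo sketch illustrates the central mechanism only (Zipf-distributed assets scattered along a coastline and struck by randomly placed, randomly intense storms); it is not the Z-CAT model, and every parameter value is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Zipf-distributed asset values (exponent ~1) scattered randomly along a unit coastline.
    n_places = 1000
    assets = 1.0 / np.arange(1, n_places + 1)
    coords = rng.uniform(0.0, 1.0, size=n_places)

    def one_season(mean_landfalls=2.0, radius=0.02):
        """Total damage from a Poisson number of storms with random positions and intensities."""
        damage = 0.0
        for _ in range(rng.poisson(mean_landfalls)):
            center = rng.uniform(0.0, 1.0)
            intensity = rng.uniform(0.0, 1.0) ** 2      # crude stand-in for an intensity draw
            hit = np.abs(coords - center) < radius
            damage += intensity * assets[hit].sum()     # damage fraction grows with intensity
        return damage

    seasons = np.array([one_season() for _ in range(10_000)])
    worst = np.sort(seasons)[-len(seasons) // 10:]      # most damaging 10% of seasons
    # The tail of `worst` can then be checked against a Pareto fit, as in the abstract.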

Relevance:

30.00%

Publisher:

Abstract:

The size of any organism is influenced by the surrounding ecological conditions. In this study, we investigate the effects of such factors on the size spectra of planktic foraminiferal assemblages from Holocene surface sediments. We analyzed assemblages from 69 Holocene samples, which cover the major physical and chemical gradients of the oceans. On a global scale, the size range of assemblages triples from the poles to the tropics. This general temperature-related size increase is interrupted by smaller sizes at temperatures characteristic of the polar and subtropical fronts, at 2°C and 17°C respectively, as well as in upwelling areas. On a regional scale, surface-water stratification, seasonality and primary productivity are highly correlated with the size patterns. Such environmentally controlled size changes are characteristic not only of entire assemblages but also of the dominant single species.

Relevance:

30.00%

Publisher:

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. Therefore, QR can be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as C. Jennen-Steinmetz and S. Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under a hypothesis test of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the proposed method is implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out the sample size and power calculation. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
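
The proposed method is implemented in R; purely as an illustration of the underlying simulation idea, the sketch below estimates the power of a Wald-type test on a single covariate in median regression using Python's statsmodels, with an assumed normal error distribution and effect size. It is not the paper's method, only the general shape of a simulation-based power calculation.

    import numpy as np
    import statsmodels.api as sm

    def empirical_power(n, beta1=0.5, tau=0.5, alpha=0.05, n_sim=500, rng=None):
        """Estimate the power of the covariate test in quantile regression by simulation.

        Assumes one covariate, a chosen error distribution (standard normal here),
        and a Wald-type test on the slope at quantile `tau`.
        """
        rng = rng or np.random.default_rng(0)
        rejections = 0
        for _ in range(n_sim):
            x = rng.standard_normal(n)
            y = 1.0 + beta1 * x + rng.standard_normal(n)   # assumed error distribution
            X = sm.add_constant(x)
            res = sm.QuantReg(y, X).fit(q=tau)
            rejections += res.pvalues[1] < alpha
        return rejections / n_sim

    # Increase n until the empirical power reaches the nominal level, e.g. 80%.
    print(empirical_power(n=60))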

Relevance:

30.00%

Publisher:

Abstract:

Large-scale wind power generation, combined with restrictions on tie-line plans, may lead to significant wind power curtailment and deep cycling of coal units during valley load periods. This study proposes a dispatch strategy for interconnected wind-coal intensive power systems (WCISs). Wind power curtailment and cycling of coal units are included in the economic dispatch analysis of the regional systems. Based on the day-ahead dispatch results, a tie-line power plan adjustment strategy is applied whenever wind power curtailment or deep cycling occurs in the economic dispatch model, with the objective of reducing these effects. The dispatch strategy is designed around the distinctive operating characteristics of interconnected WCISs, and dispatch results for regional systems in China show that the proposed strategy is feasible and can improve overall system operation performance.
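
A toy single-period sketch of the trade-off involved (coal generation cost versus heavily penalized wind curtailment, with a bounded adjustment of the planned tie-line export) is given below using scipy; the actual dispatch model in the study is multi-period and far more detailed, and all numbers here are illustrative.

    from scipy.optimize import linprog

    # Illustrative single-period data (MW): a valley-load hour with abundant wind.
    load, wind_avail, tie_plan = 800.0, 600.0, 100.0    # demand, available wind, planned export
    coal_min, coal_max = 400.0, 1000.0                  # minimum-stable and maximum coal output
    tie_adj_max = 150.0                                 # allowed tie-line plan adjustment
    coal_cost, curtail_penalty = 30.0, 300.0            # $/MWh; curtailment penalized heavily

    # Variables x = [coal_output, wind_curtailment, tie_line_increase]
    c = [coal_cost, curtail_penalty, 0.0]
    # Power balance: coal + (wind_avail - curtailment) = load + (tie_plan + tie_increase)
    A_eq = [[1.0, -1.0, -1.0]]
    b_eq = [load + tie_plan - wind_avail]
    bounds = [(coal_min, coal_max), (0.0, wind_avail), (-tie_adj_max, tie_adj_max)]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    coal, curtailed, tie_increase = res.x
    # With these numbers the optimum raises the export instead of curtailing wind,
    # which is the behaviour the adjustment strategy aims for.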

Relevance:

30.00%

Publisher:

Abstract:

A target irradiated with a high-power laser pulse blows off a large amount of charge, and as a consequence the target itself becomes a generator of electromagnetic pulses (EMP) owing to the high return current flowing to ground through the target holder. The first measurement of the magnetic field induced by the neutralizing current, which reaches values of a few kA, was performed with an inductive target probe at the PALS Laser Facility (Cikhardt et al., Rev. Sci. Instrum. 85 (2014) 103507). A full description of EMP generation should contain information on the spatial distribution and temporal variation of the electromagnetic field inside and outside the interaction chamber. For this reason, we consider the interaction chamber as a resonant cavity in which different EMP modes oscillate for hundreds of nanoseconds, until the EMP is transmitted outside through the glass windows and the EM waves are attenuated. Since the experimental determination of the electromagnetic field distribution is limited by the number of antennas that can be deployed, the mapping of the electromagnetic field has to be complemented by numerical simulations. Thus, this work reports a detailed numerical mapping of the electromagnetic field inside the interaction chamber at the PALS Laser Facility (covering a frequency spectrum from 100 MHz to 3 GHz) using the commercial code COMSOL Multiphysics 5.2. Moreover, we compare the EMP generated in the parallelepiped-like interaction chamber used at the Vulcan Petawatt Laser Facility at the Rutherford Appleton Laboratory against that produced in the spherical interaction chamber of PALS.
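
For orientation, the resonant frequencies of an idealized rectangular (parallelepiped) cavity follow f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2); the short sketch below evaluates this formula for placeholder dimensions, which are not those of the PALS or Vulcan chambers, to show why the relevant spectrum starts around 100 MHz for metre-scale chambers.

    import itertools
    import numpy as np

    C = 299_792_458.0                                   # speed of light, m/s

    def cavity_modes(a, b, d, max_index=5):
        """Resonant frequencies (Hz) of an ideal rectangular cavity with sides a, b, d (m)."""
        freqs = []
        for m, n, p in itertools.product(range(max_index + 1), repeat=3):
            if (m > 0) + (n > 0) + (p > 0) >= 2:        # valid modes need two nonzero indices
                freqs.append(0.5 * C * np.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2))
        return np.sort(np.array(freqs))

    # Placeholder 2 m x 2 m x 1.5 m chamber: the lowest modes fall near 100 MHz.
    modes = cavity_modes(2.0, 2.0, 1.5)
    print(modes[:5] / 1e6, "MHz")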

Relevance:

30.00%

Publisher:

Abstract:

Although microturbines (MTs) are among the most successfully commercialized distributed energy resources, their long-term effects on the distribution network have not been fully investigated, owing to the complex thermo-fluid-mechanical energy conversion processes involved. This is further complicated by the fact that the parameters and internal data of MTs are not always available to the electric utility because of different ownerships and confidentiality concerns. To address this issue, a general modeling approach for MTs is proposed in this paper, which allows for long-term simulation of the distribution network with multiple MTs. First, the feasibility of deriving a simplified MT model for long-term dynamic analysis of the distribution network is discussed, based on a physical understanding of the dynamic processes that occur within MTs. Then a three-stage identification method is developed to obtain a piecewise MT model and predict electro-mechanical system behavior with saturation. Next, assisted by an electric power flow calculation tool, a fast simulation methodology is proposed to evaluate the long-term impact of multiple MTs on the distribution network. Finally, the model is verified with Capstone C30 microturbine experiments and further applied to the dynamic simulation of a modified IEEE 37-node test feeder, with promising results.

Relevance:

30.00%

Publisher:

Abstract:

In Germany the upscaling algorithm is currently the standard approach for estimating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which a power production is calculated at each PV plant from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35,000 PV plants. It was found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performances of the upscaling and probabilistic approaches have been compared on the basis of 15-min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered here). When the number of reference plants is limited (<50 reference plants for the considered case study), the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
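
A minimal sketch of the frequency-weighted averaging step, assuming that a table of frequent (tilt, azimuth) combinations and their frequencies is available for the plant's capacity class and region, and that some irradiance-to-power model is supplied; pv_power below is a stand-in, not the model used in the thesis.

    import numpy as np

    def expected_power(param_table, weather, pv_power):
        """Most probable power output of a plant whose orientation is unknown.

        param_table : iterable of (tilt_deg, azimuth_deg, frequency) tuples whose
                      frequencies sum to 1, estimated per capacity class and region.
        weather     : whatever inputs the power model needs (irradiance, temperature, ...).
        pv_power    : callable (tilt, azimuth, weather) -> power per kWp.
        """
        return sum(freq * pv_power(tilt, az, weather) for tilt, az, freq in param_table)

    # Illustrative use with a dummy power model and three frequent orientations.
    dummy_model = lambda tilt, az, w: w * np.cos(np.radians(tilt - 30))   # not a real PV model
    table = [(30, 180, 0.6), (20, 180, 0.25), (45, 160, 0.15)]
    p_per_kwp = expected_power(table, weather=0.8, pv_power=dummy_model)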

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

In this study, different configurations of a wood stove are compared in order to characterize the main factors influencing particle formation and release. The design parameters of a wood stove are studied experimentally and with computational fluid dynamics to establish guidelines for reducing particle emissions. The experimental method used is an adaptation of the EPA/CSA procedure used by industry for wood stove certification. To better describe the behaviour of a given wood stove design at a given burn rate, the study also addresses the internal temperature distribution, the composition of the gaseous emissions, the air-fuel ratio, the overall efficiency of the stove, the moisture content, and the air distribution inside the combustion chamber. Monitoring these parameters revealed their significant influence on emissions across the stove modifications and operating conditions considered.

Relevance:

30.00%

Publisher:

Abstract:

Vector-borne disease emergence in recent decades has been associated with different environmental drivers including changes in habitat, hosts and climate. Lyme borreliosis is among the most important vector-borne diseases in the Northern hemisphere and is an emerging disease in Scotland. Transmitted by Ixodid tick vectors between large numbers of wild vertebrate host species, Lyme borreliosis is caused by bacteria from the Borrelia burgdorferi sensu lato species group. Ecological studies can inform how environmental factors such as host abundance and community composition, habitat and landscape heterogeneity contribute to spatial and temporal variation in risk from B. burgdorferi s.l. In this thesis, a range of approaches was used to investigate the effects of vertebrate host communities and individual host species as drivers of B. burgdorferi s.l. dynamics and its tick vector Ixodes ricinus. Host species differ in reservoir competence for B. burgdorferi s.l. and as hosts for ticks. Deer are incompetent transmission hosts for B. burgdorferi s.l. but are significant hosts of all life-stages of I. ricinus. Rodents and birds are important transmission hosts of B. burgdorferi s.l. and common hosts of immature life-stages of I. ricinus. In this thesis, surveys of woodland sites revealed variable effects of deer density on B. burgdorferi prevalence, from no effect (Chapter 2) to a possible 'dilution' effect resulting in lower prevalence at higher deer densities (Chapter 3). An invasive species in Scotland, the grey squirrel (Sciurus carolinensis), was found to host diverse genotypes of B. burgdorferi s.l. and may act as a spill-over host for strains maintained by native host species (Chapter 4). Habitat fragmentation may alter the dynamics of B. burgdorferi s.l. via effects on the host community and host movements. In this thesis, there was a lack of persistence of the rodent-associated genospecies of B. burgdorferi s.l. within a naturally fragmented landscape (Chapter 3). Rodent host biology, particularly population cycles and dispersal ability, is likely to affect pathogen persistence and recolonization in fragmented habitats. Heterogeneity in disease dynamics can occur spatially and temporally due to differences in the host community, habitat and climatic factors. Higher numbers of I. ricinus nymphs, and a higher probability of detecting a nymph infected with B. burgdorferi s.l., were found in areas with warmer climates as estimated by growing degree days (Chapter 2). The ground vegetation type associated with the highest number of I. ricinus nymphs varied between studies in this thesis (Chapters 2 & 3) and does not appear to be a reliable predictor across large areas. B. burgdorferi s.l. prevalence and genospecies composition were highly variable for the same sites sampled in subsequent years (Chapter 2). This suggests that dynamic variables such as reservoir host and deer densities should be measured, alongside more static habitat and climatic factors, to understand the drivers of B. burgdorferi s.l. infection in ticks. Heterogeneity in parasite loads amongst hosts is a common finding which has implications for disease ecology and management. Using a 17-year data set of tick infestations in a wild bird community in Scotland, different effects of age and sex on tick burdens were found among four species of passerine bird (Chapter 5). There were also different rates of decline in tick burdens among bird species in response to a long-term decrease in questing tick pressure over the study. Species-specific patterns may be driven by differences in behaviour and immunity and highlight the importance of comparative approaches. Combining whole genome sequencing (WGS) and population genetics offers a novel approach to identifying ecological drivers of pathogen populations. An initial analysis of WGS from B. burgdorferi s.s. isolates sampled 16 years apart suggests that there is a signal of measurable evolution (Chapter 6). This suggests demographic analyses may be applied to understand ecological and evolutionary processes in these bacteria. This work shows how host communities, habitat and climatic factors can affect the local transmission dynamics of B. burgdorferi s.l. and the potential risk of infection to humans. Spatial and temporal heterogeneity in pathogen dynamics poses challenges for the prediction of risk. New tools such as WGS of the pathogen (Chapter 6) and blood meal analysis techniques will add power to future studies on the ecology and evolution of B. burgdorferi s.l.

Relevance:

30.00%

Publisher:

Abstract:

Thermal characterization of high-power light-emitting diodes (LEDs) and laser diodes (LDs) is one of the most critical issues in achieving optimal performance in terms of center wavelength, spectrum, power efficiency, and reliability. Unique electrical/optical/thermal characterizations are proposed to analyze the complex thermal issues of high-power LEDs and LDs. First, an advanced inverse approach, based on the transient junction temperature behavior, is proposed and implemented to quantify the resistance of the die-attach thermal interface (DTI) in high-power LEDs. A hybrid analytical/numerical model is utilized to determine an approximate transient junction temperature behavior, which is governed predominantly by the resistance of the DTI. An accurate value of the resistance of the DTI is then determined inversely from the experimental data over the predetermined transient time domain using numerical modeling. Secondly, the effect of junction temperature on the heat dissipation of high-power LEDs is investigated. The theoretical dependence of heat dissipation on two major junction-temperature-dependent parameters, the forward voltage and the radiant flux, is reviewed. Measurements of the heat dissipation over a wide range of junction temperatures then quantify the effect of these parameters for commercially available LEDs. An empirical model of heat dissipation is proposed for applications in practice. Finally, a hybrid experimental/numerical method is proposed to predict the junction temperature distribution of a high-power LD bar. A commercial water-cooled LD bar is used to demonstrate the proposed method. A unique experimental setup is developed and implemented to measure the average junction temperature of the LD bar. After measuring the heat dissipation of the LD bar, the effective heat transfer coefficient of the cooling system is determined inversely. The characterized properties are used to predict the junction temperature distribution over the LD bar under high operating currents. The results are presented in conjunction with the wall-plug efficiency and the center wavelength shift.
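
As a hedged sketch of the general shape of such an inverse step, the snippet below fits a single lumped RC stage, assumed to be dominated by the die-attach interface, to a transient junction temperature curve by least squares; the actual analysis uses a hybrid analytical/numerical model, and the data here are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    P_HEAT = 1.0      # dissipated power, W (assumed known from electrical/optical measurement)
    T_AMB = 25.0      # reference temperature, deg C

    def lumped_rc(t, r_th, c_th):
        """Junction temperature of a single lumped RC stage responding to a power step."""
        return T_AMB + P_HEAT * r_th * (1.0 - np.exp(-t / (r_th * c_th)))

    # Illustrative 'measured' transient (in practice from the forward-voltage method).
    t = np.linspace(0.0, 0.5, 200)
    measured = lumped_rc(t, 8.0, 0.004) + np.random.default_rng(0).normal(0.0, 0.05, t.size)

    popt, _ = curve_fit(lumped_rc, t, measured, p0=[5.0, 0.01], bounds=(0, np.inf))
    r_dti, c_dti = popt   # r_dti approximates the die-attach thermal resistance in this toy model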