894 results for Average Power Ratio


Relevance:

30.00%

Publisher:

Abstract:

A DSP implementation of Space Vector PWM (SVPWM) using constant V/Hz control for the open-winding doubly-fed generator is proposed. The combination of SVPWM modulation and the open-winding structure achieves a high voltage utilization ratio, greatly improves the control precision of the system, and reduces the distortion rate of the stator winding output current, although it increases the complexity of the system. This paper describes the basic principle of SVPWM and discusses the particularity of the SVPWM waveform generated by hybrid vectors under the open-winding condition. The method is applied to a doubly-fed wind power generator. Experimental verification using a TMS32028335 chip shows that this control method, based on constant V/Hz control with a symmetric SVPWM modulation wave, can regulate the doubly-fed induction generator output to a voltage amplitude of 380 V at a frequency of 50 Hz.
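
As a rough illustration of the modulation step described above, the dwell times of the two active vectors and the zero vector in one switching period of conventional SVPWM can be sketched as follows. This is the textbook formulation with illustrative parameter values, not the paper's hybrid-vector, open-winding variant:

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Dwell times for the two active vectors and the zero vector in one
    switching period of conventional SVPWM.

    v_ref : reference voltage magnitude (V)
    theta : angle within the current 60-degree sector (rad)
    v_dc  : DC-link voltage (V)
    t_s   : switching period (s)
    """
    m = math.sqrt(3.0) * v_ref / v_dc                 # modulation index
    t1 = t_s * m * math.sin(math.pi / 3.0 - theta)    # first active vector
    t2 = t_s * m * math.sin(theta)                    # second active vector
    t0 = t_s - t1 - t2                                # zero vectors (split)
    return t1, t2, t0
```

At the sector midpoint (theta = pi/6) the two active-vector dwell times are equal, and the zero-vector time shrinks as the modulation index approaches one.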

Relevance:

30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
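
As a hedged sketch of how such an observer-model figure of merit can be computed, the following Monte Carlo estimate of a non-prewhitening matched-filter detectability index uses a hypothetical Gaussian lesion in white noise. All parameter values are illustrative, and white noise is a simplification (real CT noise is correlated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lesion: 2-D Gaussian bump on a 32x32 pixel grid
# (peak contrast 20 HU, sigma 3 px -- illustrative values only)
x = np.arange(32) - 15.5
xx, yy = np.meshgrid(x, x)
signal = 20.0 * np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))

def npw_dprime(signal, noise_std, n_trials=2000):
    """Monte Carlo non-prewhitening matched-filter detectability index:
    the template is applied to signal-present and signal-absent images."""
    t = signal.ravel()  # the NPW template is the expected signal itself
    present = [t @ (t + rng.normal(0.0, noise_std, t.size)) for _ in range(n_trials)]
    absent = [t @ rng.normal(0.0, noise_std, t.size) for _ in range(n_trials)]
    return (np.mean(present) - np.mean(absent)) / np.sqrt(
        0.5 * (np.var(present) + np.var(absent)))
```

Lowering the noise level (i.e., raising the dose) raises the detectability index, which is the qualitative behavior the dissertation quantifies on real systems.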

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
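
The standard rectangular-ROI form of such an NPS estimate can be sketched as follows; the irregular-ROI method developed in this work is more involved, and the values below are illustrative:

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    """2-D noise power spectrum estimated from square noise-only ROIs,
    e.g. obtained by subtracting two repeated scans of the same phantom."""
    n = rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in rois:
        detrended = roi - roi.mean()   # remove the DC (mean) component
        acc += np.abs(np.fft.fft2(detrended)) ** 2
    # Normalization: NPS = (pixel area / number of pixels) * <|DFT|^2>
    return np.fft.fftshift(acc / len(rois)) * pixel_size_mm**2 / (n * n)
```

For white noise the estimate is flat, with mean approximately equal to the pixel variance times the pixel area; textured or iteratively reconstructed backgrounds change both its magnitude and shape.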

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
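
A minimal example of such an analytical lesion model is a radially symmetric profile with a size, a contrast, and a smooth (here logistic) edge; all parameter values below are illustrative, not the models fitted in this work:

```python
import numpy as np

def lesion_profile(r, contrast_hu, radius_mm, edge_mm):
    """Radially symmetric lesion model: a flat core of the given contrast
    with a logistic edge profile of characteristic width edge_mm."""
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

# Voxelize the model on a grid; the result can be added to a patient
# image to create a "hybrid" image with exactly known ground truth.
grid = np.arange(-10.0, 10.25, 0.25)
xx, yy = np.meshgrid(grid, grid)
lesion = lesion_profile(np.hypot(xx, yy),
                        contrast_hu=-15.0, radius_mm=4.0, edge_mm=0.5)
```

The profile equals the full contrast at the center, half the contrast at the nominal radius, and decays to zero away from the lesion.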

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this work is to determine the true cost incurred by the Republic of Ireland and Northern Ireland in order to meet their EU renewable electricity targets. The primary all-island of Ireland policy goal is that 40% of electricity will come from renewable sources in 2020. From this it is expected that wind generation on the Irish electricity system will be in the region of 32-37% of total generation. This leads to issues resulting from wind energy being a non-synchronous, unpredictable and variable source of energy, used on a scale never seen before for a single synchronous system. If changes are not made to traditional operational practices, the efficient running of the electricity system will be directly affected by these issues in the coming years. Using models of the electricity system for the all-island grid of Ireland, the effects of the high wind energy penetration expected to be present in 2020 are examined. These models were developed using a unit commitment and economic dispatch tool called PLEXOS, which allows a detailed representation of the electricity system to be achieved down to individual generator level. These models replicate the true running of the electricity system through use of day-ahead scheduling and semi-relaxed use of these schedules that reflects the Transmission System Operator's real-time decision making on dispatch. In addition, they carefully consider other non-wind priority dispatch generation technologies that have an effect on the overall system. In the models developed, three main issues associated with wind energy integration were selected to be examined in detail to determine the sensitivity of assumptions presented in other studies. These three issues include wind energy's non-synchronous nature, its variability and spatial correlation, and its unpredictability.
This leads to an examination of the effects in three areas: the need for system operation constraints required for system security; different onshore to offshore ratios of installed wind energy; and the degree of accuracy in wind energy forecasting. Each of these areas directly impacts the way in which the electricity system is run, as they address each of the three issues associated with wind energy stated above, respectively. It is shown that assumptions in these three areas have a large effect on the results in terms of total generation costs, wind curtailment and generator technology type dispatch. In particular, accounting for these issues has resulted in wind curtailment being predicted in much larger quantities than previously reported. This would have a large effect on wind energy companies because it is already a very low profit margin industry. Results from this work have shown that the relaxation of system operation constraints is crucial to the economic running of the electricity system, with large improvements shown in the reduction of wind curtailment and system generation costs. There are clear benefits in having a proportion of the wind installed offshore in Ireland, which would help to reduce the variability of wind energy generation on the system and therefore reduce wind curtailment. With envisaged future improvements in day-ahead wind forecasting from 8% to 4% mean absolute error, there are potential reductions in wind curtailment, system costs and open-cycle gas turbine usage. This work illustrates the consequences of assumptions in the areas of system operation constraints, onshore/offshore installed wind capacities and accuracy in wind forecasting, to better inform the true costs associated with running Ireland's changing electricity system as it continues to decarbonise into the near future.
This work also proposes to illustrate, through the use of Ireland as a case study, the effects that will become ever more prevalent in other synchronous systems as they pursue a path of increasing renewable energy generation.
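
The way a minimum synchronous-generation constraint forces wind curtailment can be illustrated with a single-period merit-order toy model. This is not the PLEXOS formulation used in the work; the constraint form and all numbers are illustrative:

```python
def dispatch(demand_mw, wind_mw, min_conventional_mw, thermal_units):
    """Single-period merit-order dispatch with a minimum synchronous
    (conventional) generation constraint for system security.

    thermal_units: list of (capacity_mw, cost_per_mwh) tuples.
    Returns (wind_used_mw, wind_curtailed_mw, conventional_cost).
    """
    # Conventional plant must cover residual demand, but never drop
    # below the system-security minimum.
    conventional = max(min_conventional_mw, demand_mw - wind_mw)
    wind_used = demand_mw - conventional
    curtailed = max(0.0, wind_mw - wind_used)
    # Fill the conventional requirement from the cheapest units first.
    remaining, cost = conventional, 0.0
    for capacity, price in sorted(thermal_units, key=lambda u: u[1]):
        taken = min(capacity, remaining)
        cost += taken * price
        remaining -= taken
    return wind_used, curtailed, cost
```

With high wind and a binding synchronous minimum, part of the available wind must be curtailed even though its marginal cost is zero, which is the mechanism behind the larger-than-expected curtailment figures reported above.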

Relevance:

30.00%

Publisher:

Abstract:

Carbon Capture and Storage (CCS) technologies provide a means to significantly reduce carbon emissions from the existing fleet of fossil-fired plants, and hence can facilitate a gradual transition from conventional to more sustainable sources of electric power. This is especially relevant for coal plants that have a CO2 emission rate that is roughly two times higher than that of natural gas plants. Of the different kinds of CCS technology available, post-combustion amine based CCS is the best developed and hence more suitable for retrofitting an existing coal plant. The high costs from operating CCS could be reduced by enabling flexible operation through amine storage or allowing partial capture of CO2 during high electricity prices. This flexibility is also found to improve the power plant’s ramp capability, enabling it to offset the intermittency of renewable power sources. This thesis proposes a solution to problems associated with two promising technologies for decarbonizing the electric power system: the high costs of the energy penalty of CCS, and the intermittency and non-dispatchability of wind power. It explores the economic and technical feasibility of a hybrid system consisting of a coal plant retrofitted with a post-combustion-amine based CCS system equipped with the option to perform partial capture or amine storage, and a co-located wind farm. A techno-economic assessment of the performance of the hybrid system is carried out both from the perspective of the stakeholders (utility owners, investors, etc.) as well as that of the power system operator.

In order to perform the assessment from the perspective of the facility owners (e.g., electric power utilities, independent power producers), an optimal design and operating strategy of the hybrid system is determined for both the amine storage and partial capture configurations. A linear optimization model is developed to determine the optimal component sizes for the hybrid system and capture rates while meeting constraints on annual average emission targets of CO2, and variability of the combined power output. Results indicate that there are economic benefits of flexible operation relative to conventional CCS, and demonstrate that the hybrid system could operate as an energy storage system: providing an effective pathway for wind power integration as well as a mechanism to mute the variability of intermittent wind power.
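
The economic intuition behind flexible (partial) capture can be sketched with a toy schedule: capture most CO2 when electricity prices are low and least when they are high, subject to an average-capture target. The greedy fill below solves this simplified linear program exactly because it has a single coupling constraint; all numbers are illustrative, not the model developed in this work:

```python
def schedule_capture(prices, avg_target, max_rate=0.9):
    """Choose per-hour CO2 capture rates meeting an average-capture
    target at minimum lost revenue (the CCS energy penalty is most
    costly when electricity prices are high)."""
    need = avg_target * len(prices)          # total capture-hours required
    rates = [0.0] * len(prices)
    for t in sorted(range(len(prices)), key=lambda i: prices[i]):
        rates[t] = min(max_rate, need)       # fill cheapest hours first
        need -= rates[t]
        if need <= 0:
            break
    return rates
```

The resulting schedule captures fully in the cheapest hours and not at all in the most expensive hour, mirroring how partial capture (or amine storage) shifts the energy penalty away from high-price periods.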

In order to assess the performance of the hybrid system from the perspective of the system operator, a modified Unit Commitment/Economic Dispatch model is built to consider and represent the techno-economic aspects of operation of the hybrid system within a power grid. The hybrid system is found to be effective in helping the power system meet an average CO2 emissions limit equivalent to the CO2 emission rate of a state-of-the-art natural gas plant, and to reduce power system operation costs as well as the number and magnitude of energy and reserve scarcity events.

Relevance:

30.00%

Publisher:

Abstract:

Concentrations of Cd, Pb, Zn, Cu, Co, Ni, Fe, and Al2O3, water content, the amounts of organic carbon, the ratio of 13C/12C and the 14C-activity of the organic fraction were determined with sediment depth from a 34 cm long box-core from the Bornholm Basin (Baltic Sea). The average sedimentation rate was 2.4 mm/yr. The upper portion of the core contained increasing amounts of 14C-inactive organic carbon, and above 3 cm depth, man-made 14C from atomic bomb tests. The concentrations of the heavy metals Cd, Pb, Zn, and Cu increase strongly towards the surface, while other metals, such as Fe, Ni and Co, remain almost unchanged. This phenomenon is attributed to anthropogenic influences. A comparison of the Kieler Bucht, the Bornholm and the Gotland Basins shows that today the anthropogenic addition of Zn is about 100 mg/m**2 yr in all three basins. The beginning of this excess of Zn, however, is delayed by about 20 years in the Bornholm Basin and by about 40 years in the Gotland Basin. It is suggested that SW-NE transport of these anthropogenically mobilized metals may be related to periodic bottom water renewal in the Baltic Sea sedimentary basins.
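
With the reported average sedimentation rate, the depth-to-age conversion implicit in these results is a simple constant-rate calculation (assuming, as a first approximation, no compaction correction):

```python
def sediment_age_years(depth_cm, rate_mm_per_yr=2.4):
    """Age of a sediment layer under a constant average sedimentation
    rate (2.4 mm/yr is the rate reported for this core)."""
    return depth_cm * 10.0 / rate_mm_per_yr
```

The 34 cm core therefore spans roughly 140 years of deposition, which is consistent with the anthropogenic metal enrichment appearing only in its upper portion.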

Relevance:

30.00%

Publisher:

Abstract:

Measurements of the stable isotopic composition (dD(H2) or dD) of atmospheric molecular hydrogen (H2) are a useful addition to mixing ratio (X(H2)) measurements for understanding the atmospheric H2 cycle. dD datasets published so far consist mostly of observations at background locations. We complement these with observations from the Cabauw tall tower at the CESAR site, situated in a densely populated region of the Netherlands. Our measurements show a large anthropogenic influence on the local H2 cycle, with frequently occurring pollution events that are characterized by X(H2) values that reach up to 1 ppm and low dD values. An isotopic source signature analysis yields an apparent source signature below -400 per mil, which is much more D-depleted than the fossil fuel combustion source signature commonly used in H2 budget studies. Two diurnal cycles that were sampled at a suburban site near London also show a more D-depleted source signature (-340 per mil), though not as extremely depleted as at Cabauw. The source signature of the Northwest European vehicle fleet may have shifted to somewhat lower values due to changes in vehicle technology and driving conditions. Even so, the surprisingly depleted apparent source signature at Cabauw requires additional explanation; microbial H2 production seems the most likely cause. The Cabauw tower site also allowed us to sample vertical profiles. We found no decrease in X(H2) at lower sampling levels (20 and 60 m) with respect to higher sampling levels (120 and 200 m). There was a significant shift to lower median dD values at the lower levels. This confirms the limited role of soil uptake around Cabauw, and again points to microbial H2 production during an extended growing season, as well as to possible differences in average fossil fuel combustion source signature between the different footprint areas of the sampling levels.
So, although knowledge of the background cycle of H2 has improved over the last decade, surprising features come to light when a non-background location is studied, revealing remaining gaps in our understanding.
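
A common way to obtain such an apparent source signature is a Keeling-plot regression: measured dD is regressed against 1/X(H2), and the intercept gives the signature of the added source. The sketch below recovers a known source signature from synthetic two-member mixing data; all numbers are illustrative and the paper's analysis may differ in detail:

```python
import numpy as np

def keeling_intercept(x_h2_ppb, delta_d_permil):
    """Keeling-plot estimate of the isotopic source signature: the
    intercept of dD vs. 1/X(H2) as 1/X -> 0."""
    slope, intercept = np.polyfit(1.0 / np.asarray(x_h2_ppb),
                                  np.asarray(delta_d_permil), 1)
    return intercept

# Synthetic pollution event: background air (530 ppb, +130 per mil)
# mixed with a source at -400 per mil (illustrative values).
bg_x, bg_d, src_d = 530.0, 130.0, -400.0
added = np.array([0.0, 50.0, 120.0, 250.0, 470.0])   # source-added H2 (ppb)
x = bg_x + added
delta = (bg_x * bg_d + added * src_d) / x            # isotope mass balance
```

Because the synthetic data obey exact two-member mixing, the regression returns the source signature exactly; real data scatter around the line and the intercept carries an uncertainty.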

Relevance:

30.00%

Publisher:

Abstract:

In this letter, we consider wireless powered communication networks which could operate perpetually, as the base station (BS) broadcasts energy to multiple energy harvesting (EH) information transmitters. These employ a “harvest then transmit” mechanism: they spend all of the energy harvested during the previous BS energy broadcast to transmit information towards the BS. Assuming time division multiple access (TDMA), we propose a novel transmission scheme for jointly optimal allocation of the BS broadcasting power and time sharing among the wireless nodes, which maximizes the overall network throughput under constraints on the average and maximum transmit power at the BS. The proposed scheme significantly outperforms “state of the art” schemes that employ only optimal time allocation. For the case of a single EH transmitter, we generalize the optimal solutions to account for fixed circuit power consumption, which reflects a much more practical scenario.
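
The time-sharing trade-off in such a "harvest then transmit" network can be illustrated numerically: a longer energy broadcast gives the nodes more energy but less time to transmit. The grid search below is a numerical stand-in for the letter's jointly optimal allocation, using a simplified reciprocal-channel model with unit noise power (all values illustrative):

```python
import math

def sum_throughput(tau0, taus, channels, p_bs=1.0, eta=0.5):
    """Sum throughput when the BS broadcasts energy for tau0 and node i
    then transmits for taus[i] using all harvested energy (unit noise)."""
    total = 0.0
    for tau_i, h_i in zip(taus, channels):
        if tau_i <= 0:
            continue
        energy = eta * p_bs * tau0 * h_i    # energy harvested by node i
        snr = energy * h_i / tau_i          # uplink SNR, reciprocal channel
        total += tau_i * math.log2(1.0 + snr)
    return total

def best_allocation(channels, steps=200):
    """Grid search over (tau0, tau1, tau2) with tau0+tau1+tau2 = 1."""
    best = (0.0, (0.0, 0.0, 1.0))
    for i in range(1, steps):
        tau0 = i / steps
        for j in range(1, steps - i):
            tau1 = j / steps
            tau2 = 1.0 - tau0 - tau1
            r = sum_throughput(tau0, [tau1, tau2], channels)
            if r > best[0]:
                best = (r, (tau0, tau1, tau2))
    return best
```

Even this crude search beats a naive equal three-way split, which is the kind of gain the letter obtains analytically (and enlarges further by also optimizing the BS broadcast power).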

Relevance:

30.00%

Publisher:

Abstract:

This paper considers a wirelessly powered wiretap channel, where an energy constrained multi-antenna information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. Based on a simple time-switching protocol where power transfer and information transmission are separated in time, we investigate two popular multi-antenna transmission schemes at the information source, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability and average secrecy rate for both schemes. In addition, simple approximations are obtained at the high signal-to-noise ratio (SNR) regime. Our results demonstrate that by exploiting the full knowledge of channel state information (CSI), we can achieve a better secrecy performance, e.g., with full CSI of the main channel, the system can achieve substantial secrecy diversity gain. On the other hand, without the CSI of the main channel, no diversity gain can be attained. Moreover, we show that the additional level of randomness induced by wireless power transfer does not affect the secrecy performance in the high SNR regime. Finally, our theoretical claims are validated by the numerical results.
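
The qualitative MRT-versus-TAS comparison can be reproduced with a small Monte Carlo sketch in Rayleigh fading. This is a simplified model (the legitimate SNR is a sum of antenna gains under MRT and the maximum gain under TAS, while the eavesdropper's effective gain stays exponential), not the paper's closed-form analysis:

```python
import math
import random

def secrecy_outage(scheme, n_ant, snr_main, snr_eve, rate_s,
                   trials=20000, seed=1):
    """Monte Carlo secrecy outage probability: outage occurs when the
    instantaneous secrecy capacity falls below the target rate."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        gains = [rng.expovariate(1.0) for _ in range(n_ant)]
        g_main = sum(gains) if scheme == "mrt" else max(gains)
        c_s = (math.log2(1.0 + snr_main * g_main)
               - math.log2(1.0 + snr_eve * rng.expovariate(1.0)))
        if c_s < rate_s:
            outages += 1
    return outages / trials
```

With full CSI (MRT) the legitimate link collects the whole array gain, so its secrecy outage probability sits below that of antenna selection, in line with the diversity behavior described above.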

Relevance:

30.00%

Publisher:

Abstract:

In many countries wind energy has become an indispensable part of the electricity generation mix. Opportunities for ground-based wind turbine systems are becoming increasingly constrained by limitations on turbine hub heights and blade lengths, and by location restrictions linked to environmental and permitting issues, including special areas of conservation and social acceptance of visual and noise impacts. In the last decade there have been numerous proposals to harness high altitude winds, such as tethered kites, airfoils and dirigible-based rotors. These technologies are designed to operate above the neutral atmospheric boundary layer at 1,300 m, where winds are more powerful and persistent, allowing much higher electricity generation capacity. This paper presents an in-depth review of the state-of-the-art of high altitude wind power, evaluates the technical and economic viability of deploying high altitude wind power as a resource in Northern Ireland, and identifies the optimal locations by considering wind data and geographical constraints. The key findings show that the total viable area over Northern Ireland for high altitude wind harnessing devices is 5,109.6 km2, with an average wind power density of 1,998 W/m2 over a 20-year span, at a fixed altitude of 3,000 m. An initial budget for a 2 MW pumping kite device indicated a total cost of £1,751,402, making it economically competitive with conventional wind-harnessing devices.
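
The wind power density figure quoted above follows from the standard kinetic power flux relation, P/A = 0.5 * rho * v^3. The sketch below uses the International Standard Atmosphere density at about 3,000 m (an assumption, roughly 0.909 kg/m^3):

```python
def wind_power_density(v_ms, rho=0.9093):
    """Kinetic power per unit swept area, P/A = 0.5 * rho * v^3 (W/m^2).
    rho defaults to the ISA air density at ~3,000 m altitude."""
    return 0.5 * rho * v_ms ** 3
```

A mean power density of about 1,998 W/m2 at this density corresponds to a cube-root-mean wind speed of roughly 16.4 m/s, which conveys how much stronger the resource is at 3,000 m than at typical turbine hub heights.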

Relevance:

30.00%

Publisher:

Abstract:

We investigate the achievable sum rate and energy efficiency of zero-forcing precoded downlink massive multiple-input multiple-output systems in Ricean fading channels. A simple and accurate approximation of the average sum rate is presented, which is valid for a system with arbitrary rank channel means. Based on this expression, the optimal power allocation strategy maximizing the average sum rate is derived. Moreover, considering a general power consumption model, the energy efficiency of the system with rank-1 channel means is characterized. Specifically, the impact of key system parameters, such as the number of users N, the number of BS antennas M, Ricean factor K and the signal-to-noise ratio (SNR) ρ are studied, and closed-form expressions for the optimal ρ and M maximizing the energy efficiency are derived. Our findings show that the optimal power allocation scheme follows the water filling principle, and it can substantially enhance the average sum rate in the presence of strong line-of-sight effect in the low SNR regime. In addition, we demonstrate that the Ricean factor K has significant impact on the optimal values of M, N and ρ.
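
The water-filling principle mentioned above can be sketched with the classic single-constraint allocation: each stream receives power up to a common "water level", with weak channels possibly left unused. This generic sketch (bisection on the water level) is not the paper's Ricean-specific derivation:

```python
def water_filling(gains, total_power):
    """Water-filling power allocation: p_k = max(0, mu - 1/g_k), with the
    water level mu chosen by bisection so the powers sum to the budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(100):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Note how the weakest channel gets no power when its inverse gain exceeds the water level, which is what makes water-filling especially beneficial under a strong line-of-sight (rank-1) component at low SNR.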

Relevance:

30.00%

Publisher:

Abstract:

We investigate the performance of dual-hop two-way amplify-and-forward (AF) relaying in the presence of inphase and quadrature-phase imbalance (IQI) at the relay node. In particular, the effective signal-to-interference-plus-noise ratio (SINR) at both sources is derived. These SINRs are used to design an instantaneous power allocation scheme, which maximizes the minimum SINR of the two sources under a total transmit power constraint. The solution to this optimization problem is analytically determined and used to evaluate the outage probability (OP) of the considered two-way AF relaying system. Both analytical and numerical results show that IQI can create fundamental performance limits on two-way relaying, which cannot be avoided by simply improving the channel conditions.
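
The structure of such a max-min power allocation can be illustrated with a toy model: when one source's SINR grows and the other's shrinks as the power split shifts, the max-min optimum balances the two. The linear SINR model below is a stand-in for the paper's IQI-impaired SINR expressions:

```python
def max_min_power_split(g_a, g_b, p_total, iters=60):
    """Split a total power budget between two sources to maximize the
    minimum SINR, assuming source 2's SINR is g_a * p1 (it receives
    source 1's signal) and source 1's SINR is g_b * (p_total - p1).
    Bisection converges to the balancing point."""
    lo, hi = 0.0, p_total
    for _ in range(iters):
        p1 = (lo + hi) / 2.0
        sinr_at_2 = g_a * p1
        sinr_at_1 = g_b * (p_total - p1)
        if sinr_at_2 < sinr_at_1:
            lo = p1      # source 1 can afford more transmit power
        else:
            hi = p1
    return p1
```

In the actual system the SINR expressions saturate under IQI, which is why improving the channels alone cannot remove the outage floor reported above.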

Relevance:

30.00%

Publisher:

Abstract:

Current trends in the automotive industry have placed increased importance on engine downsizing for passenger vehicles. Engine downsizing often results in reduced power output, and turbochargers have been relied upon to restore the power output and maintain drivability. As improved power output is required across a wide range of engine operating conditions, it is necessary for the turbocharger to operate effectively at both design and off-design conditions. One off-design condition of considerable importance for turbocharger turbines is low velocity ratio operation, which refers to the combination of high exhaust gas velocity and low turbine rotational speed. Conventional radial flow turbines are constrained to achieve peak efficiency at the relatively high velocity ratio of 0.7, due to the requirement to maintain a zero inlet blade angle for structural reasons. Several methods exist to potentially shift turbine peak efficiency to lower velocity ratios. One method is to utilize a mixed flow turbine as an alternative to a radial flow turbine. In addition to radial and circumferential components, the flow entering a mixed flow turbine also has an axial component. This allows the flow to experience a non-zero inlet blade angle, potentially shifting peak efficiency to a lower velocity ratio when compared to an equivalent radial flow turbine.
This study examined the effects of varying the flow conditions at the inlet to a mixed flow turbine and evaluated the subsequent impact on performance. The primary parameters examined were the average inlet flow angle, the spanwise distribution of flow angle across the inlet, and the inlet flow cone angle. The rotor studied was a custom in-house design based on a state-of-the-art radial flow turbine. A numerical approach was used as the basis for this investigation, and the numerical model was validated against experimental data obtained from the cold flow turbine test rig at Queen's University Belfast. The results indicated that the inlet flow angle significantly influenced the degree of reaction across the rotor and the turbine efficiency, providing a useful insight into how the flow conditions at rotor inlet influence the performance of a mixed flow turbine.
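The velocity ratio discussed above is commonly defined as the rotor tip speed divided by the isentropic spouting velocity. A minimal sketch of that definition, assuming an ideal-gas isentropic expansion with typical exhaust-gas property values (the `cp` and `gamma` figures are illustrative assumptions, not values from this study):

```python
import math

def isentropic_velocity(t0_in, pr, cp=1148.0, gamma=1.33):
    """Spouting velocity C_is = sqrt(2*cp*T0*(1 - PR^((1-gamma)/gamma)))
    for an ideal-gas isentropic expansion from inlet total temperature t0_in
    through total-to-static pressure ratio pr. cp [J/(kg K)] and gamma are
    assumed typical exhaust-gas values."""
    return math.sqrt(2.0 * cp * t0_in * (1.0 - pr ** ((1.0 - gamma) / gamma)))

def velocity_ratio(rpm, tip_diameter, t0_in, pr):
    """U/C_is: rotor tip speed over the isentropic spouting velocity."""
    u_tip = math.pi * tip_diameter * rpm / 60.0
    return u_tip / isentropic_velocity(t0_in, pr)
```

Dropping the rotational speed or raising the expansion ratio both push U/C_is down, which is exactly the low velocity ratio off-design regime the abstract describes.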


1. Genome-wide association studies (GWAS) enable detailed dissection of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured, so it is essential that a method for GWAS includes the process of repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted, and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model while remaining computationally fast enough to be useful.

2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden.

3. The method we present here was successful in estimating permanent environmental effects from simulated repeated-measures data. Additionally, we found that, especially for variable phenotypes with large between-year variation, the repeated-measurements model has a substantial increase in power compared with a model using average phenotypes as the response.

4. The method is available in the R package RepeatABEL. It increases the power in GWAS having repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
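The two-stage scheme described in point 2 can be sketched on toy data: assume the covariance V of the repeated records has already been estimated, then fit each SNP by generalized least squares. Everything below (sample sizes, variance components, the 0.3 effect size) is an illustrative assumption, not RepeatABEL's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy repeated-measures design: n_ind individuals, each recorded 4 times.
n_ind, n_rec_per_ind = 50, 4
n_rec = n_ind * n_rec_per_ind
ids = np.repeat(np.arange(n_ind), n_rec_per_ind)

# Permanent-environment structure: records of the same individual covary.
Z = np.zeros((n_rec, n_ind))
Z[np.arange(n_rec), ids] = 1.0
V = 0.5 * Z @ Z.T + 1.0 * np.eye(n_rec)   # sigma_pe^2 = 0.5, sigma_e^2 = 1.0

# Simulated 0/1/2 genotypes (repeated per record) and phenotype with a
# true SNP effect of 0.3 plus individual and residual noise.
snp = rng.integers(0, 3, n_ind)[ids].astype(float)
y = 0.3 * snp + (0.7 * rng.standard_normal(n_ind))[ids] + rng.standard_normal(n_rec)

def gls_snp_effect(y, snp, V):
    """GLS estimate of the SNP effect and its standard error, given V."""
    X = np.column_stack([np.ones_like(snp), snp])
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    beta = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)
    se = np.sqrt(np.diag(np.linalg.inv(XtVinvX)))
    return beta[1], se[1]

beta_hat, se_hat = gls_snp_effect(y, snp, V)
```

Estimating V once and reusing it for every SNP is what keeps the per-SNP cost down to a single GLS solve, which is the computational point made in point 1.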


The purpose was to determine running economy and lactate threshold among male elite football players with high and low aerobic power. Forty male elite football players from the highest Swedish division ("Allsvenskan") participated in the study. In a test of running economy (RE) and blood lactate accumulation, the participants ran four minutes each at 10, 12, 14, and 16 km•h-1 on a horizontal level, with one minute of rest between each four-minute interval. After the last sub-maximal speed level, the participants rested for two minutes before a test of maximal oxygen uptake (VO2max). Players with a maximal oxygen uptake lower than the average for the total population, 57.0 mL O2•kg-1•minute-1, were assigned to the low aerobic power (LAP) group (n=17); players with a VO2max equal to or higher than 57.0 mL O2•kg-1•minute-1 were assigned to the high aerobic power (HAP) group (n=23). The VO2max was significantly different between the HAP and LAP groups. The average RE, measured as oxygen uptake at 12, 14, and 16 km•h-1, was significantly lower, but the blood lactate concentration at 14 and 16 km•h-1 was significantly higher, for the LAP group compared with the HAP group.
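The grouping rule used above (split at the squad-average VO2max of 57.0 mL O2•kg-1•minute-1, with the threshold value itself going to the HAP group) can be illustrated with made-up numbers; the values below are not the study's measurements.

```python
import numpy as np

# Illustrative VO2max values in mL O2/kg/min (invented, not study data).
vo2max = np.array([52.1, 61.3, 55.0, 58.9, 63.2, 49.8, 57.0, 60.4])

THRESHOLD = 57.0                       # squad average reported in the abstract
lap = vo2max[vo2max < THRESHOLD]       # low aerobic power group
hap = vo2max[vo2max >= THRESHOLD]      # high aerobic power group (>= threshold)

group_gap = hap.mean() - lap.mean()    # between-group VO2max difference
```

A player sitting exactly on the average lands in the HAP group, matching the "equal to or higher than" wording of the abstract.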


This dissertation comprises five articles.