880 results for Functional Requirements for Authority Data (FRAD)
Abstract:
There has been an increased use of the Doubly-Fed Induction Machine (DFIM) in ac drive applications in recent times, particularly in the field of renewable energy systems and other high-power variable-speed drives. The DFIM is widely regarded as the optimal generation system for both onshore and offshore wind turbines and has also been considered in wave power applications. Wind power generation is the most mature renewable technology. However, wave energy has attracted large interest recently as the potential for power extraction is very significant. Various wave energy converter (WEC) technologies currently exist, with the oscillating water column (OWC) type converter being one of the most advanced. There are fundamental differences between the power profile of the pneumatic power supplied by the OWC WEC and that of a wind turbine, and this causes significant challenges in the selection and rating of electrical generators for OWC devices. The thesis initially aims to provide an accurate per-phase equivalent circuit model of the DFIM by investigating various characterisation testing procedures. Novel testing methodologies based on series-coupling tests are employed and are found to provide a more accurate representation of the DFIM than the standard IEEE testing methods, because the series-coupling tests provide a direct method of determining the equivalent-circuit resistances and inductances of the machine. A second novel method, known as the extended short-circuit test, is also presented and investigated as an alternative characterisation method. Experimental results on a 1.1 kW DFIM and a 30 kW DFIM utilising the various characterisation procedures are presented in the thesis. The various test methods are analysed and validated through comparison of model predictions and torque-versus-speed curves for each induction machine.
Sensitivity analysis is also used as a means of quantifying the effect of experimental error on the results taken from each of the testing procedures and is used to determine the suitability of the test procedures for characterising each of the devices. The series-coupling differential test is demonstrated to be the optimum test. The research then focuses on the OWC WEC and the modelling of this device. A software model is implemented based on data obtained from a scaled prototype device situated at the Irish test site. Test data from the electrical system of the device is analysed and this data is used to develop a performance curve for the air turbine utilised in the WEC. This performance curve was applied in a software model to represent the turbine in the electro-mechanical system and the software results are validated by the measured electrical output data from the prototype test device. Finally, once both the DFIM and the OWC WEC power take-off system have been modelled successfully, an investigation of the application of the DFIM to the OWC WEC model is carried out to determine the electrical machine rating required for the pulsating power derived from the OWC WEC device. Thermal analysis of a 30 kW induction machine is carried out using a first-order thermal model. The simulations quantify the limits of operation of the machine and enable the development of rating requirements for the electrical generation system of the OWC WEC. The thesis can be considered to have three sections. The first section of the thesis contains Chapters 2 and 3 and focuses on the accurate characterisation of the doubly-fed induction machine using various testing procedures. The second section, containing Chapter 4, concentrates on the modelling of the OWC WEC power take-off with particular focus on the Wells turbine. Validation of this model is carried out through comparison of simulations and experimental measurements.
The third section of the thesis utilises the OWC WEC model from Chapter 4 with a 30 kW induction machine model to determine the optimum device rating for the specified machine. Simulations are carried out to perform thermal analysis of the machine to give a general insight into electrical machine rating for an OWC WEC device.
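The torque-versus-speed validation described above follows from the per-phase equivalent circuit once its resistances and reactances are characterised. As a minimal sketch of the standard Thevenin-reduction calculation (the parameter values used below are illustrative, not those of the thesis machines):

```python
import math

def torque_speed(V1, f, poles, R1, X1, R2, X2, Xm, slip):
    """Electromagnetic torque (N.m) at a given slip from the standard
    per-phase equivalent circuit, via Thevenin reduction of the stator side.
    R1, X1: stator; R2, X2: referred rotor; Xm: magnetising reactance."""
    ws = 4 * math.pi * f / poles                     # synchronous speed, rad/s
    Zs = R1 + 1j * X1
    Vth = V1 * (1j * Xm) / (Zs + 1j * Xm)            # Thevenin source voltage
    Zth = (1j * Xm) * Zs / (Zs + 1j * Xm)            # Thevenin impedance
    I2 = Vth / (Zth + R2 / slip + 1j * X2)           # referred rotor current
    return 3 * abs(I2) ** 2 * R2 / (slip * ws)
```

Sweeping `slip` over a range traces the torque-speed curve; negative slip (super-synchronous operation) yields negative, i.e. generating, torque, which is the regime of interest for a DFIM-based generator.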
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smart phones, tablets, laptops) we see an ever-increasing demand for high-definition videos online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide us with graceful changes in video quality, all while respecting our viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media, while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how to best utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. 
We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as providing new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
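The interleaving idea referred to above can be sketched simply: frames of a GOP are assigned round-robin to packets, so that a single packet loss removes scattered frames rather than a contiguous burst. This is a generic illustration, not the thesis's actual packetisation scheme; all names are invented:

```python
def interleave_gop(frames, n_packets):
    """Round-robin assignment of GOP frames to packets: losing one packet
    then costs every n-th frame instead of a contiguous run."""
    packets = [[] for _ in range(n_packets)]
    for i, frame in enumerate(frames):
        packets[i % n_packets].append(frame)
    return packets

def surviving_frames(packets, lost):
    """Frames still recoverable when the packet indices in `lost` drop."""
    return sorted(f for i, p in enumerate(packets) if i not in lost for f in p)
```

For a 12-frame GOP spread over 4 packets, losing one packet leaves 9 evenly spaced frames, which concealment can interpolate more gracefully than a missing run of 3 consecutive frames.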
Abstract:
Copper dimethylamino-2-propoxide [Cu(dmap)2] is used as a precursor for low-temperature atomic layer deposition (ALD) of copper thin films. Chemisorption of the precursor is the necessary first step of ALD, but it is not known in this case whether there is selectivity for adsorption sites, defects, or islands on the substrate. Therefore, we study the adsorption of the Cu(dmap)2 molecule on the different sites on flat and rough Cu surfaces using PBE, PBE-D3, optB88-vdW, and vdW-DF2 methods. We found that the relative order of adsorption energies for Cu(dmap)2 on Cu surfaces is Eads (PBE-D3) > Eads (optB88-vdW) > Eads (vdW-DF2) > Eads (PBE). The PBE and vdW-DF2 methods predict one chemisorption structure, while optB88-vdW predicts three chemisorption structures for Cu(dmap)2 adsorption among four possible adsorption configurations, whereas PBE-D3 predicts a chemisorbed structure for all the adsorption sites on Cu(111). All the methods with and without van der Waals corrections yield a chemisorbed molecule on the Cu(332) step and Cu(643) kink because of less steric hindrance on the vicinal surfaces. Strong distortion of the molecule and significant elongation of Cu–N bonds are predicted in the chemisorbed structures, indicating that the ligand–Cu bonds break during the ALD of Cu from Cu(dmap)2. The molecule loses its initial square-planar structure and gains linear O–Cu–O bonding as these atoms attach to the surface. As a result, the ligands become unstable and the precursor becomes more reactive to the coreagent. Charge redistribution mainly occurs between the adsorbate O–Cu–O bond and the surface. Bader charge analysis shows that electrons are donated from the surface to the molecule in the chemisorbed structures, so that the Cu center in the molecule is partially reduced.
Abstract:
Limb, trunk, and body weight measurements were obtained for growth series of Milne-Edwards's diademed sifaka, Propithecus diadema edwardsi, and the golden-crowned sifaka, Propithecus tattersalli. Similar measures were also obtained, primarily from adults, for two subspecies of the western sifaka: Propithecus verreauxi coquereli, Coquerel's sifaka, and Propithecus verreauxi verreauxi, Verreaux's sifaka. Ontogenetic series for the larger-bodied P. d. edwardsi and the smaller-bodied P. tattersalli were compared to evaluate whether species-level differences in body proportions result from the differential extension of common patterns of relative growth. In bivariate plots, both subspecies of P. verreauxi were included to examine whether these taxa also lie along a growth trajectory common to all sifakas. Analyses of the data indicate that postcranial proportions for sifakas are ontogenetically scaled, much as demonstrated previously with cranial dimensions for all three species (Ravosa, 1992). As such, P. d. edwardsi apparently develops larger overall size primarily by growing at a faster rate, but not for a longer duration of time, than P. tattersalli and P. verreauxi; this is similar to results based on cranial data. A consideration of Malagasy lemur ecology suggests that regional differences in forage quality and resource availability have strongly influenced the evolutionary development of body-size variation in sifakas. On one hand, the rainforest environment of P. d. edwardsi imposes greater selective pressures for larger body size than the dry-forest environment of P. tattersalli and P. v. coquereli, or the semi-arid climate of P. v. verreauxi. On the other hand, as progressively smaller-bodied adult sifakas are located in the east, west, and northwest, this apparently supports suggestions that adult body size is set by dry-season constraints on food quality and distribution (i.e., smaller taxa are located in more seasonal habitats such as the west and northeast). 
Moreover, the fact that body-size differentiation occurs primarily via differences in growth rate is apparently also due to differences in resource seasonality (and, in turn, juvenile mortality risk) between the eastern rainforest and the more temperate northeast and west. Most scaling coefficients for both arm and leg growth range from slight negative allometry to slight positive allometry. Given the low intermembral index for sifakas, which is also an adaptation for propulsive hindlimb-dominated jumping, this suggests that differences in adult limb proportions are largely set prenatally rather than being achieved via higher rates of postnatal hindlimb growth. (Abstract truncated at 400 words.)
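Scaling coefficients like those summarised above are conventionally estimated as the slope of a log-log regression of a limb measure on overall body size, with a slope of 1.0 indicating isometry. A minimal sketch using ordinary least squares (the data and the choice of line-fitting method here are illustrative, not those of the study):

```python
import math

def allometric_slope(size, measure):
    """OLS slope of log(measure) on log(size): 1.0 = isometry,
    < 1 negative allometry, > 1 positive allometry."""
    lx = [math.log(v) for v in size]
    ly = [math.log(v) for v in measure]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```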
Abstract:
The dorsomedial prefrontal cortex (DMPFC) plays a central role in aspects of cognitive control and decision making. Here, we provide evidence for an anterior-to-posterior topography within the DMPFC using tasks that evoke three distinct forms of control demands (response, decision, and strategic), each of which could be mapped onto independent behavioral data. Specifically, we identify three spatially distinct regions within the DMPFC: a posterior region associated with control demands evoked by multiple incompatible responses, a middle region associated with control demands evoked by the relative desirability of decision options, and an anterior region that predicts control demands related to deviations from an individual's preferred decision-making strategy. These results provide new insight into the functional organization of DMPFC and suggest how recent controversies about its role in complex decision making and response mapping can be reconciled.
Abstract:
BACKGROUND: Genetic association studies are conducted to discover genetic loci that contribute to an inherited trait, identify the variants behind these associations and ascertain their functional role in determining the phenotype. To date, functional annotations of the genetic variants have rarely played more than an indirect role in assessing evidence for association. Here, we demonstrate how these data can be systematically integrated into an association study's analysis plan. RESULTS: We developed a Bayesian statistical model for the prior probability of phenotype-genotype association that incorporates data from past association studies and publicly available functional annotation data regarding the susceptibility variants under study. The model takes the form of a binary regression of association status on a set of annotation variables whose coefficients were estimated through an analysis of associated SNPs in the GWAS Catalog (GC). The functional predictors examined included measures that have been demonstrated to correlate with the association status of SNPs in the GC and some whose utility in this regard is speculative: summaries of the UCSC Human Genome Browser ENCODE super-track data, dbSNP function class, sequence conservation summaries, proximity to genomic variants in the Database of Genomic Variants and known regulatory elements in the Open Regulatory Annotation database, PolyPhen-2 probabilities and RegulomeDB categories. Because we expected that only a fraction of the annotations would contribute to predicting association, we employed a penalized likelihood method to reduce the impact of non-informative predictors and evaluated the model's ability to predict GC SNPs not used to construct the model. We show that the functional data alone are predictive of a SNP's presence in the GC. 
Further, using data from a genome-wide study of ovarian cancer, we demonstrate that their use as prior data when testing for association is practical at the genome-wide scale and improves power to detect associations. CONCLUSIONS: We show how diverse functional annotations can be efficiently combined to create 'functional signatures' that predict the a priori odds of a variant's association to a trait and how these signatures can be integrated into a standard genome-wide-scale association analysis, resulting in improved power to detect truly associated variants.
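The binary-regression prior described above maps a variant's annotation vector to prior odds of association through the logistic link. A minimal sketch of that mapping (the coefficient values in use are hypothetical, not the ones estimated from the GWAS Catalog, and the penalised fitting step is omitted):

```python
import math

def prior_association_odds(annotations, coefficients, intercept):
    """Prior odds that a variant is trait-associated, from a logistic
    (binary) regression on functional annotation features:
    odds = exp(b0 + sum_i b_i * x_i)."""
    logit = intercept + sum(b * x for b, x in zip(coefficients, annotations))
    return math.exp(logit)

def prior_probability(odds):
    """Convert prior odds to a prior probability."""
    return odds / (1.0 + odds)
```

A variant carrying informative annotations (nonzero features with positive coefficients) receives higher prior odds than the baseline, which is how the functional signature shifts the subsequent association test.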
Abstract:
Post-traumatic stress disorder (PTSD) affects regions that support autobiographical memory (AM) retrieval, such as the hippocampus, amygdala and ventral medial prefrontal cortex (PFC). However, it is not well understood how PTSD may impact the neural mechanisms of memory retrieval for the personal past. We used a generic cue method combined with parametric modulation analysis and functional MRI (fMRI) to investigate the neural mechanisms affected by PTSD symptoms during the retrieval of a large sample of emotionally intense AMs. There were three main results. First, the PTSD group showed greater recruitment of the amygdala/hippocampus during the construction of negative versus positive emotionally intense AMs, when compared to controls. Second, across both the construction and elaboration phases of retrieval the PTSD group showed greater recruitment of the ventral medial PFC for negatively intense memories, but less recruitment for positively intense memories. Third, the PTSD group showed greater functional coupling between the ventral medial PFC and the amygdala for negatively intense memories, but less coupling for positively intense memories. In sum, the fMRI data suggest that there was greater recruitment and coupling of emotional brain regions during the retrieval of negatively intense AMs in the PTSD group when compared to controls.
Abstract:
Although it is known that brain regions in one hemisphere may interact very closely with their corresponding contralateral regions (collaboration) or operate relatively independent of them (segregation), the specific brain regions (where) and conditions (how) associated with collaboration or segregation are largely unknown. We investigated these issues using a split field-matching task in which participants matched the meaning of words or the visual features of faces presented to the same (unilateral) or to different (bilateral) visual fields. Matching difficulty was manipulated by varying the semantic similarity of words or the visual similarity of faces. We assessed the white matter using the fractional anisotropy (FA) measure provided by diffusion tensor imaging (DTI) and cross-hemispheric communication in terms of fMRI-based connectivity between homotopic pairs of cortical regions. For both perceptual and semantic matching, bilateral trials became faster than unilateral trials as difficulty increased (bilateral processing advantage, BPA). The study yielded three novel findings. First, whereas FA in anterior corpus callosum (genu) correlated with word-matching BPA, FA in posterior corpus callosum (splenium-occipital) correlated with face-matching BPA. Second, as matching difficulty intensified, cross-hemispheric functional connectivity (CFC) increased in domain-general frontopolar cortex (for both word and face matching) but decreased in domain-specific ventral temporal lobe regions (temporal pole for word matching and fusiform gyrus for face matching). Last, a mediation analysis linking DTI and fMRI data showed that CFC mediated the effect of callosal FA on BPA. These findings clarify the mechanisms by which the hemispheres interact to perform complex cognitive tasks.
Abstract:
Centromeres are chromosomal loci essential for genome stability. Their malfunction can cause chromosome instability associated with cancer, infertility, and birth defects. This study focused on an intriguing centromere on human chromosome 17, which displays normal functional variation. Centromere identity can be found on either of two large arrays of repetitive DNA. We investigated inter-individual sequence variation on these two arrays and found association between array size, array variation, and centromere function. Our data suggest a functional influence of DNA sequence at this critical epigenetic locus.
Physical Activity, Central Adiposity, and Functional Limitations in Community-Dwelling Older Adults.
Abstract:
BACKGROUND AND PURPOSE: Obesity and physical inactivity are independently associated with physical and functional limitations in older adults. The current study examines the impact of physical activity on odds of physical and functional limitations in older adults with central and general obesity. METHODS: Data from 6279 community-dwelling adults aged 60 years or more from the Health and Retirement Study 2006 and 2008 waves were used to calculate prevalence and odds of physical and functional limitation among obese older adults with high waist circumference (waist circumference ≥88 cm in females and ≥102 cm in males) who were physically active versus inactive (engaging in moderate/vigorous activity less than once per week). Logistic regression models were adjusted for age, sex, race/ethnicity, education, smoking status, body mass index, and number of comorbidities. RESULTS: Physical activity was associated with lower odds of physical and functional limitations among older adults with high waist circumference (odds ratio [OR], 0.59; confidence interval [CI], 0.52-0.68, for physical limitations; OR, 0.52; CI, 0.44-0.62, for activities of daily living; and OR, 0.44; CI, 0.39-0.50, for instrumental activities of daily living). CONCLUSIONS: Physical activity is associated with significantly lower odds of physical and functional limitations in obese older adults regardless of how obesity is classified. Additional research is needed to determine whether physical activity moderates long-term physical and functional limitations.
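The odds ratios reported above come from adjusted logistic regression models, but the quantity itself is easy to illustrate from a 2x2 table; a minimal sketch with a Wald confidence interval (illustrative only, not the study's adjusted estimates):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)
```

An OR below 1 with a confidence interval excluding 1, as in the results above, indicates significantly lower odds of limitation in the active group.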
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D+dual energy+time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. 
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
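Low-rank plus sparse decompositions of the kind exploited above are typically enforced through two proximal operators: soft-thresholding for sparsity and singular value thresholding for low rank. A minimal sketch of these building blocks (generic illustrations only; the paper's actual algorithm is split Bregman with rank-sparse kernel regression, which is not reproduced here):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    promoting sparsity."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def singular_value_threshold(X, t):
    """Proximal operator of the nuclear norm: soft-threshold the singular
    values of X, promoting low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt
```

Iterating operators like these within a splitting scheme (e.g. split Bregman or ADMM) alternately cleans the sparse contrast component and the low-rank averaged component of the data.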
Abstract:
MOTIVATION: Although many network inference algorithms have been presented in the bioinformatics literature, no suitable approach has been formulated for evaluating their effectiveness at recovering models of complex biological systems from limited data. To overcome this limitation, we propose an approach to evaluate network inference algorithms according to their ability to recover a complex functional network from biologically reasonable simulated data. RESULTS: We designed a simulator to generate data representing a complex biological system at multiple levels of organization: behaviour, neural anatomy, brain electrophysiology, and gene expression of songbirds. About 90% of the simulated variables are unregulated by other variables in the system and are included simply as distracters. We sampled the simulated data at intervals as one would sample from a biological system in practice, and then used the sampled data to evaluate the effectiveness of an algorithm we developed for functional network inference. We found that our algorithm is highly effective at recovering the functional network structure of the simulated system, including the irrelevance of unregulated variables, from sampled data alone. To assess the reproducibility of these results, we tested our inference algorithm on 50 separately simulated sets of data and it consistently recovered almost perfectly the complex functional network structure underlying the simulated data. To our knowledge, this is the first approach for evaluating the effectiveness of functional network inference algorithms at recovering models from limited data. Our simulation approach also enables researchers to design, a priori, experiments and data-collection protocols that are amenable to functional network inference.
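Scoring an inference algorithm against a simulated ground truth, as described above, reduces to comparing the inferred edge set with the known one; a minimal sketch (function and variable names are illustrative, not the paper's evaluation code):

```python
def edge_precision_recall(true_edges, inferred_edges):
    """Precision and recall of an inferred edge set against the simulated
    ground-truth network; edges are hashable pairs such as tuples."""
    true_set, inferred_set = set(true_edges), set(inferred_edges)
    tp = len(true_set & inferred_set)          # correctly recovered edges
    precision = tp / len(inferred_set) if inferred_set else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    return precision, recall
```

High precision on a simulation where 90% of variables are distracters is the demanding part of such an evaluation: an algorithm must avoid linking the many unregulated variables into the network.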
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design and crew emergency procedures are safe from an evacuation point of view? In the wake of major maritime disasters such as the Herald of Free Enterprise and the Estonia, and in light of the growth in the numbers of high-density, high-speed ferries and large-capacity cruise ships, issues concerned with the evacuation of passengers and crew at sea are receiving renewed interest. In the maritime industry, ship evacuation models offer the promise to quickly and efficiently bring evacuation considerations into the design phase, while the ship is "on the drawing board". maritimeEXODUS, winner of the BCS, CITIS and RINA awards, is such a model. Features such as the ability to realistically simulate human response to fire, the capability to model human performance in heeled orientations, a virtual reality environment that produces realistic visualisations of the modelled scenarios, and an integrated abandonment model make maritimeEXODUS a truly unique tool for assessing the evacuation capabilities of all types of vessels under a variety of conditions. This paper describes the maritimeEXODUS model, the SHEBA facility from which data concerning passenger/crew performance in conditions of heel is derived, and an example application demonstrating the model's use in performing an evacuation analysis for a large passenger ship, partially based on the requirements of MSC circular 1033.
Abstract:
This paper presents a generic framework that can be used to describe study plans using meta-data. The context of this research and the associated technologies and standards are presented. The approach adopted here has been developed within the mENU project, which aims to provide a model for a European Networked University. The methodology for the design of the generic framework is discussed and the main design requirements are presented. The approach adopted was based on a set of templates containing meta-data required for the description of programs of study and consisting of generic building elements annotated appropriately. The process followed to develop the templates is presented together with a set of evaluation criteria to test the suitability of the approach. The template structure is presented and example templates are shown. A first evaluation of the approach has shown that the proposed framework can provide a flexible and competent means for the generic description of study plans for the purposes of a networked university.
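As an illustration of the template idea, a study-plan record can be checked against a meta-data template of required building elements. The field names below are invented for the sketch and are not the mENU schema:

```python
# Hypothetical template: field names are illustrative, not the mENU meta-data schema.
study_plan_template = {
    "programme": {"title": "", "awarding_institution": "", "ects_total": 0},
    "modules": [
        {"code": "", "title": "", "ects": 0, "prerequisites": []},
    ],
}

def validate_plan(plan, template=study_plan_template):
    """Check that a study-plan record carries every top-level field
    the template requires."""
    return all(key in plan for key in template)
```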
Abstract:
This report concerns the development of the AASK V4.0 database (CAA Project 560/SRG/R+AD). AASK is the Aircraft Accident Statistics and Knowledge database, which is a repository of survivor accounts from aviation accidents. Its main purpose is to store observational and anecdotal data from interviews of the occupants involved in aircraft accidents. The AASK database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It is also key to the development of aircraft evacuation models such as airEXODUS, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. With support from the UK CAA (Project 277/SRG/R&AD), AASK V3.0 was developed. This was an on-line prototype system available over the internet to selected users and included a significantly increased number of passenger accounts compared with earlier versions, the introduction of cabin crew accounts, the introduction of fatality information and improved functionality through the seat plan viewer utility. The most recently completed AASK project (Project 560/SRG/R+AD) involved four main components: a) analysis of the data collected in V3.0; b) continued collection and entry of data into AASK; c) maintenance and functional development of the AASK database; and d) user feedback survey. All four components have been pursued and completed in this two-year project. The current version developed in the last year of the project is referred to as AASK V4.0. This report provides summaries of the work done and the results obtained in relation to the project deliverables.