19 results for Methodology for Collecting, Estimating, and Organizing Microeconomic Data
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
This paper presents a current and turbulence measurement campaign conducted at a test site in an energetic tidal channel known as Strangford Narrows, Northern Ireland. The data were collected as part of the MaRINET project, funded by the EU under its FP7 framework, in a collaborative effort between Queen's University Belfast, SCHOTTEL and Fraunhofer IWES. The site is highly turbulent with a strong shear flow. Longer-term measurements of the flow regime were made using a bottom-mounted Acoustic Doppler Profiler (ADP). During a specific turbulence measurement campaign, two collocated instruments were used to measure incoming flow characteristics: an ADP (Aquadopp, Nortek) and a turbulence profiler (MicroRider, Rockland Scientific International). The instruments recorded the same incoming flow, so that direct comparisons between the data can be made. This study presents the methodology adopted to deploy the instruments, compares the resulting turbulence measurements from the different types of instrumentation, and discusses the usefulness of each instrument for the relevant range of applications. The paper shows the ranges of the frequency spectra obtained using the different instruments, with the combined measurements providing insight into the structure of the turbulence across a wide range of scales.
Abstract:
A novel phosphoramidite, N,N-diisopropylamino-2-cyanoethyl-ortho-methylbenzylphosphoramidite 1, was prepared. The reaction of 1 with DMTrT and subsequent derivatisation of the phosphite triester product under solution-phase, Michaelis–Arbuzov conditions was investigated. Coupling of 1 with the terminal hydroxyl groups of support-bound oligodeoxyribonucleotides and subsequent reaction with an activated disulfide yielded oligonucleotides bearing a terminal, phosphorothiolate-linked, lipophilic moiety. The oligomers were readily purified using RP-HPLC. Silver(I)-mediated cleavage of the phosphorothiolate linkage and desalting of the oligonucleotides were readily performed in one step to cleanly yield the corresponding phosphate monoester-terminated oligomers.
Abstract:
This study investigates face recognition under partial occlusion, illumination variation and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all these problems. PUM is an approach for selecting the optimal local image features for recognition, improving robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as few as a single training sample while offering robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, through a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches were used. The results demonstrate the improved robustness of the new methods.
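The core idea behind selecting optimal local features for occlusion robustness — scoring local image regions separately and basing the decision only on the best-matching subset — can be illustrated with a minimal sketch. Note this is a generic illustration of the principle, not the authors' actual PUM formulation: the grid size, the negative-Euclidean-distance similarity and the top-k selection rule are all assumptions made here.

```python
import numpy as np

def block_similarities(probe, gallery, grid=(4, 4)):
    """Split two equal-size face images into a grid of local blocks and
    return one similarity score per block (negative Euclidean distance)."""
    h, w = probe.shape
    bh, bw = h // grid[0], w // grid[1]
    sims = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            p = probe[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()
            g = gallery[i*bh:(i+1)*bh, j*bw:(j+1)*bw].ravel()
            sims.append(-np.linalg.norm(p - g))
    return np.array(sims)

def union_score(sims, k):
    """Score a match using only the k best-matching local blocks, so
    occluded (poorly matching) regions are excluded from the decision."""
    return float(np.sort(sims)[-k:].mean())

def classify(probe, gallery_images, k=8):
    """Return the index of the best-scoring gallery identity; note this
    needs only one training sample (gallery image) per person."""
    scores = [union_score(block_similarities(probe, g), k)
              for g in gallery_images]
    return int(np.argmax(scores))
```

Because the occluded blocks simply fall out of the top-k selection, a probe whose corner is blanked out still matches its true identity perfectly on the remaining blocks.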
Abstract:
Dynamic power consumption depends strongly on interconnect, so clever mapping of digital signal processing algorithms to parallelised realisations with data locality is vital. This is a particular problem for fast algorithm implementations, where designers will typically have sacrificed circuit structure for efficiency in software implementation. This study outlines an approach for reducing the dynamic power consumption of a class of fast algorithms by minimising the index space separation; this allows the generation of field programmable gate array (FPGA) implementations with reduced power consumption. It is shown that a 50% reduction in relative index space separation yields measured power gains of 36% and 37% over a Cooley–Tukey Fast Fourier Transform (FFT)-based solution, from actual power measurements on a Xilinx Virtex-II FPGA implementation and circuit measurements on a Xilinx Virtex-5 implementation, respectively. The authors show the generality of the approach by applying it to a number of other fast algorithms, namely the discrete cosine, discrete Hartley and Walsh–Hadamard transforms.
Abstract:
There is substantial international variation in human papillomavirus (HPV) prevalence; this study presents the first report from Northern Ireland and additionally provides a systematic review and meta-analysis pooling the prevalence of high-risk HPV (HR-HPV) subtypes among women with normal cytology in the UK and Ireland. Between February and December 2009, routine liquid-based cytology (LBC) samples were collected for HPV detection (Roche Cobas® 4800 [PCR]) among unselected women attending for cervical cytology testing. Four electronic databases, including MEDLINE, were then searched from their inception until April 2011. A random-effects meta-analysis was used to calculate a pooled HR-HPV prevalence and associated 95% confidence intervals (CI). 5,712 women, mean age 39 years (±SD 11.9 years; range 20-64 years), were included in the analysis, of whom 5,068 (88.7%), 417 (7.3%) and 72 (1.3%) had normal, low-grade and high-grade cytological findings, respectively. Crude HR-HPV prevalence was 13.2% (95% CI, 12.7-13.7) among women with normal cytology and increased with cytological grade. In the meta-analysis, the pooled HR-HPV prevalence among those with normal cytology was 0.12 (95% CI, 0.10-0.14; 21 studies), with the highest prevalence in younger women. HPV 16- and HPV 18-specific estimates were 0.03 (95% CI, 0.02-0.05) and 0.01 (95% CI, 0.01-0.02), respectively. The findings of this Northern Ireland study and meta-analysis verify the prevalent nature of HPV infection among younger women. Reporting the type-specific prevalence of HPV infection is relevant for evaluating the impact of future HPV immunization initiatives, particularly against HR-HPV types other than HPV 16 and 18. J. Med. Virol. 85:295-308, 2013. © 2012 Wiley Periodicals, Inc.
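A random-effects pooling of study proportions of the kind reported here can be sketched with the standard DerSimonian–Laird estimator. This is a generic implementation on made-up prevalence figures — it is not the study's actual data, software or exact statistical model:

```python
import math

def pooled_prevalence(events, totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of proportions.
    Returns (pooled estimate, lower 95% CI, upper 95% CI)."""
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]  # within-study variances
    w = [1 / vi for vi in v]                             # fixed-effect weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    df = len(p) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]                 # random-effects weights
    p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return p_re, p_re - z * se, p_re + z * se
```

The between-study variance `tau2` is what distinguishes the random-effects model from a fixed-effect one: it widens the confidence interval when the individual study prevalences disagree more than sampling error alone would explain.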
Abstract:
High-quality data from appropriate archives are needed for the continuing improvement of radiocarbon calibration curves. We discuss here the basic assumptions behind 14C dating that necessitate calibration and the relative strengths and weaknesses of archives from which calibration data are obtained. We also highlight the procedures, problems and uncertainties involved in determining atmospheric and surface ocean 14C/12C in these archives, including a discussion of the various methods used to derive an independent absolute timescale and uncertainty. The types of data required for the current IntCal database and calibration curve model are tabulated with examples.
Abstract:
In highly heterogeneous aquifer systems, conceptualization of regional groundwater flow models frequently results in the generalization or neglect of aquifer heterogeneities, either of which may result in erroneous model outputs. Calculating equivalence for hydrogeological parameters and applying it to upscaling provides a means of accounting for measurement-scale information at the regional scale. In this study, the Permo-Triassic Lagan Valley strategic aquifer in Northern Ireland is observed to be heterogeneous, if not discontinuous, due to subvertically trending low-permeability Tertiary dolerite dykes. Interpretation of ground and aerial magnetic surveys produces a deterministic solution to dyke locations. By measuring relative permeabilities of both the dykes and the sedimentary host rock, equivalent directional permeabilities, which determine anisotropy, are calculated as a function of dyke density. This provides parameters for larger-scale equivalent blocks, which can be imported directly into numerical groundwater flow models. Different conceptual models with different degrees of upscaling are numerically tested and the results compared to regional flow observations. Simulation results show that the permeabilities upscaled from geophysical data properly account for the observed spatial variations of groundwater flow, without requiring artificial distribution of aquifer properties. It is also found that an intermediate degree of upscaling, between accounting for mapped field-scale dykes and applying one regional anisotropy value (maximum upscaling), provides results closest to the observations at the regional scale.
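The notion of equivalent directional permeability for a host rock cut by parallel low-permeability dykes can be illustrated with the classic layered-medium averages: arithmetic mean for flow parallel to the dykes, harmonic mean for flow across them. This is a textbook sketch under a parallel-layer assumption, not the paper's upscaling scheme, and the numbers used below are hypothetical rather than measured Lagan Valley values:

```python
def equivalent_permeabilities(k_host, k_dyke, dyke_fraction):
    """Equivalent directional permeabilities for a host rock containing a
    volume fraction `dyke_fraction` of parallel low-permeability dykes.
    Parallel to the dykes the layers act in parallel (arithmetic mean);
    across the dykes they act in series (harmonic mean)."""
    f = dyke_fraction
    k_parallel = (1 - f) * k_host + f * k_dyke
    k_perpendicular = 1.0 / ((1 - f) / k_host + f / k_dyke)
    anisotropy = k_parallel / k_perpendicular
    return k_parallel, k_perpendicular, anisotropy
```

Even a small dyke fraction suppresses cross-dyke flow far more strongly than along-dyke flow, which is the anisotropy that upscaled equivalent blocks would carry into a regional flow model.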
Abstract:
Power dissipation and robustness to process variation impose conflicting design requirements: voltage scaling is associated with larger variations, while Vdd upscaling or transistor upsizing for parametric-delay variation tolerance can be detrimental to power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance and output quality. In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the less important computations are affected by delay failures. We also propose a different sliding-window size from the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings. © 2006 IEEE.
Abstract:
Recent technological advances have increased the quantity of movement data being recorded. While valuable knowledge can be gained by analysing such data, its sheer volume creates challenges. Geovisual analytics, which supports human cognition by providing tools to reason about data, offers powerful techniques to resolve these challenges. This paper introduces such a geovisual analytics environment for exploring movement trajectories, providing visualisation interfaces based on the classic space-time cube. Additionally, a new approach, using the mathematical description of motion within a space-time cube, is used to determine the similarity of trajectories and forms the basis for clustering them. These techniques were used to analyse pedestrian movement. The results reveal interesting and useful spatiotemporal patterns and clusters of pedestrians exhibiting similar behaviour.
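One simple way to compare trajectories in a space-time cube — resampling each (t, x, y) path to a common set of time stamps and measuring the mean spatial separation — can be sketched as below. The resample-then-Euclidean-distance choice is an illustrative assumption made here, not the paper's actual motion-based similarity measure:

```python
import numpy as np

def resample(traj, n=50):
    """Resample a trajectory of (t, x, y) rows to n evenly spaced times
    via linear interpolation, so trajectories become directly comparable."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(traj[0, 0], traj[-1, 0], n)
    x = np.interp(t, traj[:, 0], traj[:, 1])
    y = np.interp(t, traj[:, 0], traj[:, 2])
    return np.column_stack([x, y])

def similarity_distance(a, b, n=50):
    """Mean spatial separation of two resampled trajectories; small values
    mean the two movers traversed the space-time cube in a similar way."""
    pa, pb = resample(a, n), resample(b, n)
    return float(np.linalg.norm(pa - pb, axis=1).mean())
```

A pairwise distance matrix built from `similarity_distance` can then be handed to any standard clustering routine (hierarchical, DBSCAN, etc.) to group pedestrians exhibiting similar behaviour.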
Abstract:
This paper presents a modeling and optimization approach for sensor placement in a building zone that supports reliable environment monitoring. © 2012 ACM.