73 results for 100602 Input Output and Data Devices
Abstract:
Urbanization-related alterations to the surface energy balance impact urban warming ('heat islands'), the growth of the boundary layer, and many other biophysical processes. Traditionally, in situ heat flux measurements have been used to quantify such processes, but these typically represent only a small local-scale area within the heterogeneous urban environment. For this reason, remote sensing approaches are very attractive for elucidating more spatially representative information. Here we use hyperspectral imagery from a new airborne sensor, the Operational Modular Imaging Spectrometer (OMIS), along with a survey map and meteorological data, to derive the land cover information and surface parameters required to map spatial variations in turbulent sensible heat flux (QH). The results from two spatially explicit flux retrieval methods, which use contrasting approaches and, to a large degree, different input data, are compared for a central urban area of Shanghai, China: (1) the Local-scale Urban Meteorological Parameterization Scheme (LUMPS) and (2) an Aerodynamic Resistance Method (ARM). Sensible heat fluxes are determined at the full 6 m spatial resolution of the OMIS sensor, and at lower resolutions via pixel aggregation and spatial averaging. At the 6 m spatial resolution, the sensible heat flux of rooftop-dominated pixels exceeds that of roads, water and vegetated areas, with values peaking at ∼350 W m⁻², whilst the storage heat flux is greatest for road-dominated pixels (peaking at around 420 W m⁻²). We investigate the use of both OMIS-derived land surface temperatures made using a Temperature–Emissivity Separation (TES) approach, and land surface temperatures estimated from air temperature measurements. Sensible heat flux differences from the two approaches over the entire 2 × 2 km study area are less than 30 W m⁻², suggesting that methods employing either strategy may be practical when operated using low spatial resolution (e.g. 1 km) data. Due to the differing methodologies, direct comparisons between results obtained with the LUMPS and ARM methods are most sensibly made at reduced spatial scales. At 30 m spatial resolution, both approaches produce similar results, with the smallest difference being less than 15 W m⁻² in mean QH averaged over the entire study area. This is encouraging given the differing architecture and data requirements of the LUMPS and ARM methods. Furthermore, in terms of mean study-area QH, the results obtained by averaging the original 6 m spatial resolution LUMPS-derived QH values to 30 and 90 m spatial resolution are within ∼5 W m⁻² of those derived from averaging the original surface parameter maps prior to input into LUMPS, suggesting that the use of much lower spatial resolution spaceborne imagery data, for example from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), is likely to be a practical solution for heat flux determination in urban areas.
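As a rough illustration of what an aerodynamic-resistance-type retrieval computes, the sketch below assumes the standard bulk-transfer form QH = ρ·cp·(Ts − Ta)/r_ah; the resistance and temperature values are illustrative and are not taken from the paper.

```python
import numpy as np

RHO = 1.2    # air density (kg m^-3)
CP = 1005.0  # specific heat of air at constant pressure (J kg^-1 K^-1)

def sensible_heat_flux(ts_kelvin, ta_kelvin, r_ah):
    """QH (W m^-2) from surface temperature Ts, air temperature Ta and
    aerodynamic resistance to heat transfer r_ah (s m^-1), using the
    bulk-transfer form QH = rho * cp * (Ts - Ta) / r_ah."""
    return RHO * CP * (ts_kelvin - ta_kelvin) / r_ah

# Example: a rooftop pixel ~8 K warmer than the air, r_ah = 30 s m^-1
print(sensible_heat_flux(308.0, 300.0, 30.0))  # ~322 W m^-2
```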
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, the latter of which was also used in a flood case study. The HP data scored better than the DP data when evaluated against the forecasts for lead times up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two data sets. The flood forecasting study showed that both data sets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
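A minimal sketch of how the two 6-hourly inputs could be constructed from gauge data: HP by summing an hourly series into 6-hour blocks, and DP by spreading each daily total uniformly over its four 6-hour blocks. The synthetic data and the uniform disaggregation are stand-ins; the paper's disaggregation scheme may differ.

```python
import numpy as np
import pandas as pd

rng = pd.date_range("2006-10-01", periods=48, freq="h")
hourly = pd.Series(np.random.default_rng(0).gamma(0.5, 1.0, 48), index=rng)

# HP: aggregate hourly totals into 6-hourly accumulations
hp = hourly.resample("6h").sum()

# DP: start from daily totals, then spread each day uniformly over
# its four 6-hour blocks (a simple stand-in for the disaggregation)
daily = hourly.resample("1D").sum()
dp = daily.reindex(hp.index, method="ffill") / 4.0

print(pd.DataFrame({"HP": hp, "DP": dp}))
```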
Abstract:
I consider the possibility that respondents to the Survey of Professional Forecasters round their probability forecasts of the event that real output will decline in the future, as well as their reported output growth probability distributions. I make various plausible assumptions about respondents’ rounding practices, and show how these impinge upon the apparent mismatch between probability forecasts of a decline in output and the probabilities of this event implied by the annual output growth histograms. I find that rounding accounts for about a quarter of the inconsistent pairs of forecasts.
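To make the rounding mechanism concrete, the toy example below (values illustrative, not from the survey) shows how a respondent who reports to the nearest five percentage points can produce a probability forecast that differs from the histogram-implied probability without any genuine inconsistency.

```python
# Toy illustration of rounding-induced mismatch between a reported
# probability of output decline and the histogram-implied probability.
def round_to(p, step):
    """Round probability p to the nearest multiple of step."""
    return step * round(p / step)

implied = 0.12                       # implied by the growth histogram
reported = round_to(implied, 0.05)   # reported to nearest 5 points
print(reported)  # 0.10 -- differs from 0.12, yet the pair is consistent
```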
Abstract:
The current study discusses new opportunities for secure ground-to-satellite communications using shaped femtosecond pulses that induce spatial hole burning in the atmosphere, enabling efficient communications with data encoded within supercontinua generated by the femtosecond pulses. Refractive index variation across the different layers of the atmosphere may be modelled by assuming that the upper strata of the atmosphere and the troposphere behave as layered composite amorphous dielectric networks composed of resistors and capacitors with different time constants in each layer. Input-output expressions for the dynamics of these networks in the frequency domain provide the transmission characteristics of the propagation medium. Femtosecond pulse shaping may be used to optimize the pulse phase-front and spectral composition across the different layers of the atmosphere. A generic procedure based on evolutionary algorithms is proposed to perform the pulse shaping. In contrast to alternative procedures that would require ab initio modelling and calculation of the propagation constant for the pulse through the atmosphere, the proposed approach is adaptive, compensating for refractive index variations along the column of air between the transmitter and receiver.
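A toy version of the layered-network idea, assuming each atmospheric layer acts as a first-order RC low-pass section with its own time constant, so the medium's frequency-domain transfer function is the cascade product of the per-layer responses. The time constants and frequency range below are illustrative only.

```python
import numpy as np

taus = [1e-15, 5e-15, 2e-14]          # per-layer RC time constants (s)
omega = np.linspace(1e12, 1e15, 500)  # angular frequencies (rad/s)

# Cascade of first-order sections: H(w) = prod_k 1 / (1 + j*w*tau_k)
H = np.ones_like(omega, dtype=complex)
for tau in taus:
    H *= 1.0 / (1.0 + 1j * omega * tau)

print(np.abs(H[:3]))  # magnitude response at the lowest frequencies
```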
Abstract:
This paper describes the hydrochemistry of a lowland, urbanised river system, The Cut in England, using in situ sub-daily sampling. The Cut receives effluent discharges from four major sewage treatment works serving around 190,000 people. These discharges consist largely of treated water, originally abstracted from the River Thames and returned via the water supply network, substantially increasing the natural flow. The hourly water quality data were supplemented by weekly manual sampling with laboratory analysis to check the hourly data and measure further determinands. Mean phosphorus and nitrate concentrations were very high, breaching standards set by EU legislation. Though 56% of the catchment area is agricultural, the hydrochemical dynamics were significantly impacted by effluent discharges, which accounted for approximately 50% of the annual catchment phosphorus (P) input loads and, on average, 59% of river flow at the monitoring point. Diurnal dissolved oxygen data demonstrated high in-stream productivity. From a comparison of high-frequency and conventional monitoring data, it is inferred that much of the primary production was dominated by benthic algae, largely diatoms. Despite the high productivity and nutrient concentrations, the river water did not become anoxic and major phytoplankton blooms were not observed. The strong diurnal and annual variation observed showed that assessments of water quality made under the Water Framework Directive (WFD) are sensitive to the time and season of sampling. It is recommended that specific sampling time windows be specified for each determinand, and that WFD targets be applied in combination to help identify periods of greatest ecological risk.
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques within complex data workflows. Currently, a main obstacle to the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms that avoid the manual invocation of pre-processing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. By automating the steps for importing FreeSurfer data, K-Surfer reduces time costs, eliminates human errors and enables the design of complex analytics workflows for neuroimage data that leverage the rich functionality available in the KNIME workbench.
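For orientation, the sketch below is a rough stand-alone analogue of one import step K-Surfer automates: pulling the subcortical volume table out of a FreeSurfer aseg.stats file (whitespace-delimited, with '#'-prefixed header lines). It is not K-Surfer itself, and the column names, taken from the usual '# ColHeaders' line, should be checked against your FreeSurfer version.

```python
import pandas as pd

# Columns as listed in the '# ColHeaders' comment of typical aseg.stats
COLS = ["Index", "SegId", "NVoxels", "Volume_mm3", "StructName",
        "normMean", "normStdDev", "normMin", "normMax", "normRange"]

def read_aseg_stats(path):
    """Parse a FreeSurfer aseg.stats table, skipping '#' comment lines."""
    return pd.read_csv(path, comment="#", sep=r"\s+",
                       header=None, names=COLS)

# volumes = read_aseg_stats("subject01/stats/aseg.stats")  # path hypothetical
# print(volumes[["StructName", "Volume_mm3"]].head())
```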
Abstract:
We analyze the interaction between university professors' teaching quality and their research and administrative activities. Our sample is a high-quality individual panel data set from a medium-sized public Spanish university that allows us to avoid several types of biases frequently encountered in the literature. Although researchers teach roughly 20% more than non-researchers, their teaching quality is also 20% higher. Instructors with no research are five times more likely than the rest to be among the worst teachers. Over much of the relevant range, we find a positive, nonlinear effect of research output and teaching quantity on teaching quality. Our conclusions may be useful for decision makers in universities and governments.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to maintain the contexts of the transactions captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture the data in the particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between the information captured by OLTP systems and the information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the design quality of OLTP and OLAP systems depends critically on: capturing facts with their associated context; encoding facts and context into data together with business rules; storing and sourcing data with those business rules; decoding data with business rules back into facts with context; and recalling facts with their associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data and business-rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases and business-rules stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture those data, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
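A hypothetical minimal record illustrating the paper's argument (this is not the UBIRQ model itself): a transaction fact stored together with its context and an identifier for the business-rule version that captured it, so that OLAP-side recall can decode the data with the same rules. All field names and values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapturedFact:
    fact: dict          # the transaction data as captured
    context: dict       # who/when/where the transaction occurred
    rule_version: str   # business-rule version applied at capture time

order = CapturedFact(
    fact={"item": "SKU-1042", "qty": 3, "net": 17.85},
    context={"channel": "web", "captured_at": "2015-06-01T10:22:00Z"},
    rule_version="pricing-rules/2015.05",
)
print(order.rule_version)  # available to the OLAP side for decoding
```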
Abstract:
Purpose: To investigate the relationship between research data management (RDM) and data sharing in the formulation of RDM policies and development of practices in higher education institutions (HEIs). Design/methodology/approach: Two strands of work were undertaken sequentially: firstly, content analysis of 37 RDM policies from UK HEIs; secondly, two detailed case studies of institutions with different approaches to RDM based on semi-structured interviews with staff involved in the development of RDM policy and services. The data are interpreted using insights from Actor Network Theory. Findings: RDM policy formation and service development has created a complex set of networks within and beyond institutions involving different professional groups with widely varying priorities shaping activities. Data sharing is considered an important activity in the policies and services of HEIs studied, but its prominence can in most cases be attributed to the positions adopted by large research funders. Research limitations/implications: The case studies, as research based on qualitative data, cannot be assumed to be universally applicable but do illustrate a variety of issues and challenges experienced more generally, particularly in the UK. Practical implications: The research may help to inform development of policy and practice in RDM in HEIs and funder organisations. Originality/value: This paper makes an early contribution to the RDM literature on the specific topic of the relationship between RDM policy and services, and openness – a topic which to date has received limited attention.
Abstract:
This paper studies the impact of exogenous and endogenous shocks (exogenous shock is used interchangeably with external shock; endogenous shock is used interchangeably with domestic shock) on output fluctuations in post-communist countries during the 2000s. The first part presents the analytical framework and formulates a research hypothesis. The second part presents a vector autoregressive estimation and analysis model, following Pesaran (2004) and Pesaran and Smith (2006), that relates real bank lending, the cyclical component of output and spreads, and accounts for cross-sectional dependence (CD) across the countries. Impulse response functions show that a positive exogenous shock leads to a drop in output sustainability for 9 of the 12 Central and Eastern European countries and Russia, whereas the effect of the endogenous shock is mild and ambiguous. Moreover, the effect of the exogenous shock is more significant during crises. Variance decompositions show that the exogenous shock had a substantial impact on the economic activity of the emerging economies in the aftermath of the crisis.
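The sketch below shows the basic impulse-response machinery on synthetic data using a plain reduced-form VAR from statsmodels; the paper's estimator additionally handles cross-sectional dependence in the Pesaran (2004) and Pesaran and Smith (2006) tradition, which this sketch does not attempt.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
data = np.zeros((T, 3))  # synthetic [lending, output cycle, spread]
for t in range(1, T):
    data[t] = 0.5 * data[t - 1] + rng.normal(scale=0.1, size=3)

res = VAR(data[1:]).fit(maxlags=2)  # reduced-form VAR(2)
irf = res.irf(12)                   # impulse responses over 12 periods
print(irf.irfs.shape)               # (13, 3, 3): horizon x response x shock
```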
Abstract:
This paper argues that offshoring indices often measure something different from what we think they measure. Using data from input-output tables of 21 European countries from 1995 to 2006, we decompose an offshoring index, distinguishing between a domestic component (structural change) and an international component (the imported inputs ratio). For offshoring of business services, a large share of the index variation is driven by the domestic component; this is even more pronounced for overall service offshoring. In the case of material offshoring, by contrast, the international component drives most of the variation in the indices. Our results therefore show that, for (business) services, the typical calculation of offshoring indices tends to overestimate the role of the imported inputs component, neglecting the role played by structural changes in the economy.
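One plausible shift-share reading of the decomposition described in the abstract, with notation assumed rather than taken from the paper: write the index as a share-weighted sum and split its change into a term driven by input shares (domestic) and a term driven by import fractions (international).

```latex
% s_i = share of input i in total inputs, w_i = imported fraction of i,
% bars denote period averages.
\[
  \mathrm{OFF} = \sum_i s_i\, w_i,
  \qquad
  \Delta \mathrm{OFF} \approx
  \underbrace{\sum_i \bar{w}_i\, \Delta s_i}_{\text{domestic (structural change)}}
  + \underbrace{\sum_i \bar{s}_i\, \Delta w_i}_{\text{international (imported inputs ratio)}}
\]
```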
Abstract:
Mobile devices can enhance undergraduate research projects and students' research capabilities. The use of mobile devices such as tablet computers will not automatically make undergraduates better researchers, but their use should make investigations, writing, and publishing more effective and may even save students time. We have explored some of the possibilities of using “tablets” and “smartphones” to aid the research and inquiry process in geography and bioscience fieldwork. We provide two case studies as illustrations of how students working in small research groups use mobile devices to gather and analyze primary data in field-based inquiry. Since April 2010, Apple's iPad has changed the way people behave in the digital world and how they access their music, watch videos, or read their email, much as the entrepreneurs Steve Jobs and Jonathan Ive intended. Now, with “apps” and “the cloud” and the ubiquitous references to them appearing in the press and on TV, academics' use of tablets is also having an impact on education and research. In our discussion we refer to the use of smartphones such as the iPhone, iPod, and Android devices under the umbrella term “tablet”. Android and Microsoft devices may not offer the same facilities as the iPad/iPhone, but many app producers now provide versions for several operating systems. Smartphones are becoming more affordable and ubiquitous (Melhuish and Falloon 2010), but a recent study of undergraduate students (Woodcock et al. 2012, 1) found that many students who own smartphones are “largely unaware of their potential to support learning”. Importantly, however, students were found to be “interested in and open to the potential as they become familiar with the possibilities” (Woodcock et al. 2012). Smartphones and iPads can be better utilized than laptops when conducting research in the field because of their portability (Welsh and France 2012). It is imperative for faculty to provide their students with opportunities to discover and employ the potential uses of mobile devices in their learning. However, it is not only the convenience of the iPad, tablet devices, or smartphones that we wish to promote, but also a way of thinking and behaving digitally. We essentially suggest that making a tablet the center of research increases the connections between related research activities.
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally minimised using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to the input data, and can also indicate the rate of convergence and the solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the balance of the error variances, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
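For orientation, the standard form of the weak-constraint objective (state estimation formulation) is sketched below with conventional notation, not copied from the thesis; setting the model-error terms η_i to zero recovers sc4DVAR.

```latex
% B, R_i, Q_i: background, observation and model-error covariances;
% H_i: observation operator; M_i: model propagator; S: the Hessian.
\[
  J(x_0,\dots,x_N) =
  \tfrac12 (x_0 - x^b)^\mathsf{T} B^{-1} (x_0 - x^b)
  + \tfrac12 \sum_{i=0}^{N} \bigl(H_i(x_i) - y_i\bigr)^\mathsf{T} R_i^{-1} \bigl(H_i(x_i) - y_i\bigr)
  + \tfrac12 \sum_{i=1}^{N} \eta_i^\mathsf{T} Q_i^{-1} \eta_i,
\]
\[
  \eta_i = x_i - M_i(x_{i-1}),
  \qquad
  \kappa(S) = \frac{\lambda_{\max}(S)}{\lambda_{\min}(S)}
  \quad \text{(condition number of the Hessian).}
\]
```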