89 results for Data Streams Distribution


Relevance: 30.00%

Abstract:

Explanations for the causes of famine and food insecurity often reside at a high level of aggregation or abstraction. Popular models within famine studies have often emphasised the role of prime movers such as population stress, or the political-economic structure of access channels, as key determinants of food security. Explanation typically resides at the macro level, obscuring the presence of substantial within-country differences in the manner in which such stressors operate. This study offers an alternative approach to analyse the uneven nature of food security, drawing on the Great Irish famine of 1845–1852. Ireland is often viewed as a classical case of Malthusian stress, whereby population outstripped food supply under a pre-famine demographic regime of expanded fertility. Many have also pointed to Ireland's integration with capitalist markets through its colonial relationship with the British state, and country-wide system of landlordism, as key determinants of local agricultural activity. Such models are misguided, ignoring both substantial complexities in regional demography, and the continuity of non-capitalistic, communal modes of land management long into the nineteenth century. Drawing on resilience ecology and complexity theory, this paper subjects a set of aggregate data on pre-famine Ireland to an optimisation clustering procedure, in order to discern the potential presence of distinctive social–ecological regimes. Based on measures of demography, social structure, geography, and land tenure, this typology reveals substantial internal variation in regional social–ecological structure, and vastly differing levels of distress during the peak famine months. This exercise calls into question the validity of accounts which emphasise uniformity of structure, by revealing a variety of regional regimes, which profoundly mediated local conditions of food security. Future research should therefore consider the potential presence of internal variations in resilience and risk exposure, rather than seeking to characterise cases based on singular macro-dynamics and stressors alone.
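
As a rough illustration of the kind of optimisation clustering described above, the sketch below partitions hypothetical regional indicators with k-means and selects the number of clusters by silhouette score. The column names and data are invented for illustration; the abstract does not specify the study's exact variables or algorithm, so this is a minimal stand-in, not a reproduction of the paper's procedure.

```python
# Minimal sketch (hypothetical data and variable names): grouping regions into
# candidate social-ecological regimes with k-means, choosing k by silhouette score.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# One row per hypothetical region (e.g. a barony); columns are illustrative only
df = pd.DataFrame({
    "pop_density": rng.lognormal(4.0, 0.5, 120),
    "pct_smallholdings": rng.uniform(10, 80, 120),
    "pct_communal_tenure": rng.uniform(0, 40, 120),
    "valuation_per_acre": rng.lognormal(0.5, 0.4, 120),
})

X = StandardScaler().fit_transform(df)

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(f"selected k={best_k}, silhouette={best_score:.2f}")
df["regime"] = best_labels
print(df.groupby("regime").mean().round(2))  # profile of each candidate regime
```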

Relevance: 30.00%

Abstract:

The power output from a wave energy converter is typically predicted using experimental and/or numerical modelling techniques. In order to yield meaningful results, the relevant characteristics of the device, together with those of the wave climate, must be modelled with sufficient accuracy.

The wave climate is commonly described using a scatter table of sea states defined according to parameters related to wave height and period. These sea states are traditionally modelled with the spectral distribution of energy defined according to some empirical formulation. Since the response of most wave energy converters varies with the frequency of excitation, their performance in a particular sea state may be expected to depend on the choice of spectral shape employed, rather than simply on the spectral parameters. Estimates of energy production may therefore be affected if the spectral distribution of wave energy at the deployment site is not well modelled. Furthermore, validation of the model may be affected by differences between the observed full-scale spectral energy distribution and the spectrum used to model it.
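
To illustrate the point about spectral shape, the sketch below builds two spectra with the same significant wave height and peak period but different shapes (a broad Bretschneider-type shape and a peaked JONSWAP-type shape with an assumed peak-enhancement factor of 3.3) and compares a standard spectral width measure. The formulation and parameter values are illustrative assumptions, not those used in the study.

```python
# Minimal sketch: same Hm0 and Tp, two different spectral energy distributions.
import numpy as np

def spectrum(f, hm0, tp, gamma=1.0):
    """Bretschneider-type shape (gamma=1) or peaked JONSWAP-type shape (gamma>1),
    rescaled so that Hm0 = 4*sqrt(m0) equals the requested value."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)
    shape = f ** -5 * np.exp(-1.25 * (fp / f) ** 4)
    shape *= gamma ** np.exp(-((f - fp) ** 2) / (2 * sigma ** 2 * fp ** 2))
    df = f[1] - f[0]
    m0 = np.sum(shape) * df
    return shape * (hm0 / (4 * np.sqrt(m0))) ** 2

f = np.linspace(0.02, 1.0, 2000)   # frequency axis, Hz
hm0, tp = 3.0, 10.0                # one scatter-table cell: Hm0 = 3 m, Tp = 10 s
s_broad = spectrum(f, hm0, tp, gamma=1.0)
s_peaked = spectrum(f, hm0, tp, gamma=3.3)

df = f[1] - f[0]
for name, s in [("broad", s_broad), ("peaked", s_peaked)]:
    m0, m1, m2 = (np.sum(f ** n * s) * df for n in (0, 1, 2))
    nu = np.sqrt(m0 * m2 / m1 ** 2 - 1)   # spectral width parameter
    print(f"{name:6s} Hm0 = {4 * np.sqrt(m0):.2f} m, bandwidth nu = {nu:.3f}")
```

Both spectra occupy the same scatter-table cell, yet their bandwidths differ; this is exactly the degree of freedom the experiments probe.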

This paper investigates the sensitivity of the performance of a bottom-hinged flap-type wave energy converter to the spectral energy distribution of the incident waves. This is investigated experimentally using a 1:20 scale model of Aquamarine Power’s Oyster wave energy converter, a bottom-hinged flap-type device situated at the European Marine Energy Centre (EMEC) in approximately 13 m water depth. The performance of the model is tested in sea states defined according to the same wave height and period parameters but adhering to different spectral energy distributions.

The results of these tests show that power capture is reduced with increasing spectral bandwidth. This result is explored with consideration of the spectral response of the device in irregular wave conditions. The implications of this result are discussed in the context of validation of the model against particular prototype data sets and estimation of annual energy production.

Relevance: 30.00%

Abstract:

When studying heterogeneous aquifer systems, especially at regional scale, a degree of generalization is anticipated. This can be due to sparse sampling regimes, complex depositional environments or lack of access for measuring the subsurface, and it can lead to an inaccurate conceptualization that is detrimental when applied to groundwater flow models. It is important that numerical models are based on observed and accurate geological information and do not rely on an artificial distribution of aquifer properties. This can still be problematic, as data will be modelled at a different scale from that at which they were collected. It is proposed here that integrating geophysics with upscaling techniques can assist in building a more realistic, deterministic groundwater flow model. In this study, the sedimentary aquifer of the Lagan Valley in Northern Ireland is chosen because it is intruded by sub-vertical dolerite dykes of lower permeability than the sandstone aquifer. The use of airborne magnetics allows the delineation of these heterogeneities, confirmed by field analysis. Permeability measured at the field scale is then upscaled to different levels using a correlation with the geophysical data, creating equivalent parameters that can be directly imported into numerical groundwater flow models. These parameters include directional equivalent permeabilities and anisotropy. Several stages of upscaling are modelled using finite elements. Initial modelling is providing promising results, especially at the intermediate scale, suggesting an accurate distribution of aquifer properties. This deterministic methodology is being expanded to include stochastic methods of locating heterogeneities from the airborne geophysical data, through the Direct Sampling method of Multiple-Point Statistics (MPS), which uses the magnetics as a training image to computationally determine a probabilistic occurrence of heterogeneity. There is also a need to apply the method to alternative geological contexts where the heterogeneity is of higher permeability than the host rock.

Relevance: 30.00%

Abstract:

We present results from SEPPCoN, an on-going Survey of the Ensemble Physical Properties of Cometary Nuclei. In this report we discuss mid-infrared measurements of the thermal emission from 89 nuclei of Jupiter-family comets (JFCs). All data were obtained in 2006 and 2007 using imaging capabilities of the Spitzer Space Telescope. The comets were typically 4-5 AU from the Sun when observed and most showed only a point-source with little or no extended emission from dust. For those comets showing dust, we used image processing to photometrically extract the nuclei. For all 89 comets, we present new effective radii, and for 57 comets we present beaming parameters. Thus our survey provides the largest compilation of radiometrically-derived physical properties of nuclei to date. We have six main conclusions: (a) The average beaming parameter of the JFC population is 1.03 ± 0.11, consistent with unity; coupled with the large distance of the nuclei from the Sun, this indicates that most nuclei have Tempel 1-like thermal inertia. Only two of the 57 nuclei had outlying values (in a statistical sense) of infrared beaming. (b) The known JFC population is not complete even at 3 km radius, and even for comets that approach to ~2 AU from the Sun and so ought to be more discoverable. Several recently-discovered comets in our survey have small perihelia and large (above ~2 km) radii. (c) With our radii, we derive an independent estimate of the JFC nuclear cumulative size distribution (CSD), and we find that it has a power-law slope of around -1.9, with the exact value depending on the bounds in radius. (d) This power-law is close to that derived by others from visible-wavelength observations that assume a fixed geometric albedo, suggesting that there is no strong dependence of geometric albedo with radius. (e) The observed CSD shows a hint of structure with an excess of comets with radii 3-6 km. (f) Our CSD is consistent with the idea that the intrinsic size distribution of the JFC population is not a simple power-law and lacks many sub-kilometer objects.
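
As a worked illustration of point (c), the sketch below fits a log-log slope to a cumulative size distribution N(>R) for a toy sample of radii. The radii and fitting bounds are invented, not the SEPPCoN measurements; varying the bounds changes the fitted slope, mirroring the caveat in the abstract.

```python
# Minimal sketch (synthetic radii): estimating a cumulative size distribution
# slope q, where N(>R) is proportional to R**q, by a log-log least-squares fit.
import numpy as np

rng = np.random.default_rng(1)
radii_km = rng.pareto(1.9, 89) + 1.0       # toy sample of 89 radii, km

def csd_slope(radii, r_min, r_max):
    r = np.sort(radii)
    n_ge = len(r) - np.arange(len(r))      # N(>=R) evaluated at each sorted radius
    mask = (r >= r_min) & (r <= r_max)
    slope, _ = np.polyfit(np.log10(r[mask]), np.log10(n_ge[mask]), 1)
    return slope

for bounds in [(1.5, 5.0), (2.0, 8.0)]:    # fitted slope depends on the chosen bounds
    print(bounds, "slope = %.2f" % csd_slope(radii_km, *bounds))
```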

Relevance: 30.00%

Abstract:

In highly heterogeneous aquifer systems, the conceptualization of regional groundwater flow models frequently results in the generalization or neglect of aquifer heterogeneities, both of which may result in erroneous model outputs. The calculation of equivalent hydrogeological parameters through upscaling provides a means of carrying measurement-scale information up to the regional scale. In this study, the Permo-Triassic Lagan Valley strategic aquifer in Northern Ireland is observed to be heterogeneous, if not discontinuous, due to subvertical, low-permeability Tertiary dolerite dykes. Interpretation of ground and aerial magnetic surveys provides a deterministic solution for the dyke locations. By measuring the respective permeabilities of the dykes and the sedimentary host rock, equivalent directional permeabilities, and hence anisotropy, are obtained as a function of dyke density. This provides parameters for larger-scale equivalent blocks, which can be directly imported into numerical groundwater flow models. Different conceptual models with different degrees of upscaling are numerically tested and the results compared with regional flow observations. Simulation results show that the permeabilities upscaled from geophysical data properly account for the observed spatial variations in groundwater flow, without requiring an artificial distribution of aquifer properties. It is also found that an intermediate degree of upscaling, between accounting for individually mapped field-scale dykes and applying a single regional anisotropy value (maximum upscaling), provides the results closest to the observations at the regional scale.
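
A minimal sketch of how directional equivalent permeabilities and anisotropy can follow from dyke density, using the classical layered-medium averages: arithmetic mean parallel to the dykes and harmonic mean across them. The permeability values below are assumed for illustration and are not the study's measurements.

```python
# Minimal sketch (illustrative values): equivalent directional permeabilities
# for a block of sandstone cut by parallel low-permeability dykes.
k_host = 1e-13   # sandstone permeability, m^2 (assumed)
k_dyke = 1e-17   # dolerite dyke permeability, m^2 (assumed)

def equivalent_permeability(phi_dyke):
    """phi_dyke: volume fraction of dykes in the upscaled block."""
    k_parallel = (1 - phi_dyke) * k_host + phi_dyke * k_dyke        # arithmetic mean
    k_normal = 1.0 / ((1 - phi_dyke) / k_host + phi_dyke / k_dyke)  # harmonic mean
    return k_parallel, k_normal

for phi in (0.01, 0.05, 0.10):
    kp, kn = equivalent_permeability(phi)
    print(f"dyke fraction {phi:.2f}: k_par = {kp:.2e}, k_perp = {kn:.2e}, "
          f"anisotropy = {kp / kn:.0f}")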

Relevance: 30.00%

Abstract:

Geogenic nickel (Ni), vanadium (V) and chromium (Cr) are present at elevated levels in soils in Northern Ireland. Whilst Ni, V and Cr total soil concentrations share common geological origins, their respective levels of oral bioaccessibility are influenced by different soil-geochemical factors. Oral bioaccessibility extractions were carried out on 145 soil samples overlying 9 different bedrock types to measure the bioaccessible portions of Ni, V and Cr. Principal component analysis identified two components (PC1 and PC2) accounting for 69% of variance across 13 variables from the Northern Ireland Tellus Survey geochemical data. PC1 was associated with underlying basalt bedrock, higher bioaccessible Cr concentrations and lower Ni bioaccessibility. PC2 was associated with regional variance in soil chemistry and hosted factors accounting for higher Ni and V bioaccessibility. Eight per cent of total V was solubilised by gastric extraction on average across the study area. High median proportions of bioaccessible Ni were observed in soils overlying sedimentary rock types. Whilst Cr bioaccessible fractions were low (max = 5.4%), the highest measured bioaccessible Cr concentration reached 10.0 mg kg-1, explained by factors linked to PC1 including high total Cr concentrations in soils overlying basalt bedrock.
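
A minimal sketch of the principal component analysis step, applied to synthetic data with invented variable names rather than the Tellus Survey variables; it shows how the variance explained by PC1 and PC2 and the variable loadings are obtained.

```python
# Minimal sketch (synthetic values, hypothetical column names): PCA of soil
# geochemistry variables, reporting explained variance and loadings.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n = 145
df = pd.DataFrame({
    "Ni_total": rng.lognormal(4.0, 0.6, n),
    "V_total": rng.lognormal(4.5, 0.5, n),
    "Cr_total": rng.lognormal(4.2, 0.7, n),
    "Fe2O3": rng.normal(8, 2, n),
    "pH": rng.normal(5.5, 0.8, n),
    "organic_carbon": rng.normal(10, 4, n),
})

X = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(X)
print("variance explained:", pca.explained_variance_ratio_.round(2))
loadings = pd.DataFrame(pca.components_.T, index=df.columns, columns=["PC1", "PC2"])
print(loadings.round(2))   # which variables load on each component
```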

Relevance: 30.00%

Abstract:

The new Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2011 document recommends a combined assessment of chronic obstructive pulmonary disease (COPD) based on current symptoms and future risk.

A large database of primary-care COPD patients across the UK was used to determine COPD distribution and characteristics according to the new GOLD classification. Eighty general practices provided patients with a Read code diagnosis of COPD. Electronic and hand searches of patient medical records were undertaken, optimising data capture.

Data for 9219 COPD patients were collected. For the 6283 patients with both forced expiratory volume in 1 s (FEV1) and modified Medical Research Council scores (mean±SD age 69.2±10.6 years, body mass index 27.3±6.2 kg·m-2), GOLD 2011 group distributions were: A (low risk and fewer symptoms) 36.1%, B (low risk and more symptoms) 19.1%, C (high risk and fewer symptoms) 19.6% and D (high risk and more symptoms) 25.3%. This is in contrast with GOLD 2007 stage classification: I (mild) 17.1%, II (moderate) 52.2%, III (severe) 25.5% and IV (very severe) 5.2%. 20% of patients with FEV1 ≥50% predicted had more than two exacerbations in the previous 12 months. 70% of patients with FEV1 <50% pred had fewer than two exacerbations in the previous 12 months.

This database, representative of UK primary-care COPD patients, identified greater proportions of patients in the mildest and most severe categories when the 2011 and 2007 GOLD classifications were compared. Discordance between airflow limitation severity and exacerbation risk was observed.
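
For reference, a minimal sketch of the GOLD 2011 combined-assessment logic that produces groups A-D. The thresholds follow the GOLD 2011 document (mMRC >= 2 for "more symptoms"; FEV1 < 50% predicted or >= 2 exacerbations in the past year for "high risk"); this is an illustration, not the study's data-extraction code.

```python
# Minimal sketch of GOLD 2011 group assignment from FEV1 % predicted,
# mMRC dyspnoea score and exacerbation history.
def gold_2011_group(fev1_pct_pred: float, mmrc: int, exacerbations: int) -> str:
    more_symptoms = mmrc >= 2
    high_risk = fev1_pct_pred < 50 or exacerbations >= 2
    if high_risk:
        return "D" if more_symptoms else "C"
    return "B" if more_symptoms else "A"

# Example: moderate airflow limitation, mMRC 2, one exacerbation last year
print(gold_2011_group(fev1_pct_pred=62, mmrc=2, exacerbations=1))  # -> "B"
```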

Relevance: 30.00%

Abstract:

The motor points of the skeletal muscles, mainly of interest to anatomists and physiologists, have recently attracted much attention from researchers in the field of functional electrical stimulation. The muscle motor point has been defined as the entry point of the motor nerve branch into the epimysium of the muscle belly. Anatomists have pointed out that many muscles in the limbs have multiple motor points. Knowledge of the location of nerve branches and terminal nerve entry points facilitates exact insertion of electrodes and suitable selection of the number required for each muscle in functional electrical stimulation. The present work therefore aimed to describe the number, location, and distribution of motor points in the human forearm muscles, with a view to obtaining optimal hand function in many clinical situations. Twenty-three adult human cadaveric forearms were dissected. The numbers of primary nerves and motor points for each muscle were tabulated, and the means and standard deviations were calculated. Data analyses were performed using a statistical analysis package (SPSS 13.0). The proximal third was the usual part of the muscle that received the motor points, and most of the forearm muscles were innervated from the lateral side and deep surface of the muscle. The information in this study may also be usefully applied in selective denervation procedures to balance muscles in spastic upper limbs. Copyright © 2007 Via Medica.

Relevance: 30.00%

Abstract:

In the last decade, mobile phones and mobile devices using cellular telecommunication network connections have become ubiquitous. In several developed countries, the penetration of such devices has surpassed 100 percent. They facilitate communication and access to large quantities of data without the requirement of a fixed location or connection. Assuming mobile phones are usually in close proximity to their users, their cellular activities and locations are indicative of the users' activities and movements. As such, these cellular devices may be considered a large-scale distributed platform for sensing human activity. This paper uses mobile operator telephony data to visualize the regional flows of people across the Republic of Ireland. In addition, the use of modified Markov chains for ranking the regions of most significance to mobile subscribers is investigated. A methodology is then presented which demonstrates how this ranking of significant regions may be used to estimate the national population, with results found to correlate strongly with census data.
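
A minimal sketch of chain-based ranking of regions from movement counts. The transition counts are invented and the damping term is a generic regularisation to keep the chain irreducible; the abstract does not specify how the Markov chains were modified, so this shows only the basic idea.

```python
# Minimal sketch (toy data): rank regions by the stationary distribution of a
# Markov chain built from observed handset transitions between regions.
import numpy as np

# counts[i, j] = observed movements from region i to region j (hypothetical)
counts = np.array([
    [120,  30,  10],
    [ 25, 200,  40],
    [  5,  35,  90],
], dtype=float)

damping = 0.85
n = counts.shape[0]
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
G = damping * P + (1 - damping) / n              # PageRank-style smoothing

# Stationary distribution: left eigenvector of G associated with eigenvalue 1
w, v = np.linalg.eig(G.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

ranking = np.argsort(-pi)
print("region ranking (most significant first):", ranking, pi.round(3))
```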

Relevance: 30.00%

Abstract:

The problem of learning from imbalanced data is of critical importance in a large number of application domains and can be a bottleneck in the performance of various conventional learning methods that assume the data distribution to be balanced. The class imbalance problem corresponds to the situation where one class massively outnumbers the other. If imbalanced data are used directly, the imbalance between the majority and minority classes can bias machine learning methods and produce unreliable outcomes. There has been increasing interest in this research area and a number of algorithms have been developed; however, independent evaluation of these algorithms is limited. This paper evaluates the performance of five representative data sampling methods that deal with class imbalance, namely SMOTE, ADASYN, BorderlineSMOTE, SMOTETomek and RUSBoost. A comparative study is conducted and the performance of each method is critically analysed in terms of assessment metrics. © 2013 Springer-Verlag.
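
A minimal sketch of this kind of comparison using the imbalanced-learn implementations of the five methods on synthetic data. The classifier and metrics below are illustrative choices, not the paper's exact experimental protocol.

```python
# Minimal sketch (synthetic imbalanced data): compare sampling methods that
# address class imbalance, scored with F1 and balanced accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score, balanced_accuracy_score
from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE
from imblearn.combine import SMOTETomek
from imblearn.ensemble import RUSBoostClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {
    "SMOTE": SMOTE(random_state=0),
    "ADASYN": ADASYN(random_state=0),
    "BorderlineSMOTE": BorderlineSMOTE(random_state=0),
    "SMOTETomek": SMOTETomek(random_state=0),
}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)
    clf = DecisionTreeClassifier(random_state=0).fit(X_res, y_res)
    y_pred = clf.predict(X_te)
    print(f"{name:16s} F1={f1_score(y_te, y_pred):.3f} "
          f"BA={balanced_accuracy_score(y_te, y_pred):.3f}")

# RUSBoost is an ensemble method rather than a plain resampler
rus = RUSBoostClassifier(random_state=0).fit(X_tr, y_tr)
y_pred = rus.predict(X_te)
print(f"{'RUSBoost':16s} F1={f1_score(y_te, y_pred):.3f} "
      f"BA={balanced_accuracy_score(y_te, y_pred):.3f}")
```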

Relevance: 30.00%

Abstract:

Identifying groundwater contributions to baseflow forms an essential part of surface water body characterisation. The Gortinlieve catchment (5 km2) comprises a headwater stream network of the Carrigans River, itself a tributary of the River Foyle, NW Ireland. The bedrock comprises poorly productive metasediments that are characterised by fracture porosity. We present the findings of a multi-disciplinary study that integrates new hydrochemical and mineralogical investigations with existing hydraulic, geophysical and structural data to identify the scales of groundwater flow and the nature of groundwater/bedrock interaction (chemical denudation). At the catchment scale, the development of deep weathering profiles is controlled by NE-SW regional-scale fracture zones associated with mountain building during the Grampian orogeny. In-situ chemical denudation of mineral phases is controlled by micro- to meso-scale fractures related to Alpine compression during Palaeocene to Oligocene times. The alteration of primary muscovite, chlorite (clinochlore) and albite along the surfaces of these small-scale fractures has resulted in the precipitation of illite, montmorillonite and illite/montmorillonite clay admixtures. The interconnected but discontinuous nature of these small-scale structures highlights the role of larger-scale faults and fissures in the supply and transportation of weathering solutions to and from the sites of mineral weathering. The dissolution of primary mineral phases releases the major ions Mg, Ca and HCO3, which are shown to subsequently form the chemical makeup of the groundwaters. Borehole groundwater and stream baseflow hydrochemical data are used to constrain the depths of the groundwater flow pathways influencing the chemistry of surface waters throughout the stream profile. The results show that it is predominantly the lower part of the catchment, which receives inputs from catchment- to regional-scale groundwater flow, that contributes to the maintenance of annual baseflow levels. This study identifies the importance of deep groundwater in maintaining annual baseflow levels in poorly productive bedrock systems.

Relevance: 30.00%

Abstract:

The accurate definition of the extreme wave loads which act on offshore structures represents a significant challenge for design engineers, and even with decades of empirical data on which to base designs there are still failures attributed to wave loading. The environmental conditions which cause these loads are infrequent and highly non-linear, which means that they are not well understood or simple to describe. If the structure is large enough to affect the incident wave significantly, further non-linear effects can influence the loading. Moreover, if the structure is floating and excited by the wave field, then its responses, which are also likely to be highly non-linear, must be included in the analysis. This makes the loading on such a structure difficult to describe, and the design codes will often suggest employing various tools including small-scale experiments, numerical and analytical methods, as well as empirical data if available.
Wave Energy Converters (WECs) are a new class of offshore structure which pose new design challenges, lacking the design codes and empirical data found in other industries. These machines are located in highly exposed and energetic sites, are designed to be excited by the waves, and will be expected to withstand extreme conditions over their 25-year design life. One such WEC, Oyster, is being developed by Aquamarine Power Ltd. Oyster is a buoyant flap, hinged close to the seabed in water depths of 10 to 15 m and piercing the water surface. The flap is driven back and forth by the action of the waves and this mechanical energy is then converted to electricity.
It has been identified in previous experiments that Oyster is not only subject to wave impacts but also occasionally slams into the water surface with high angular velocity. This slamming effect has been identified as an extreme load case, and work is ongoing to describe it in terms of the pressure exerted on the outer skin and the transfer of this short-duration impulsive load through various parts of the structure.
This paper describes a series of 1:40 scale experiments undertaken to investigate the pressure on the face of the flap during the slamming event. A vertical array of pressure sensors is used to measure the pressure exerted on the flap. Characteristics of the slam pressure, such as the rise time, magnitude, spatial distribution and temporal evolution, are revealed. Similarities are drawn between this slamming phenomenon and classical water-entry problems, such as ship hull slamming. With this similitude identified, common analytical tools are used to predict the slam pressure, which is compared to that measured in the experiments.
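
A minimal sketch of the classical water-entry style of estimate referred to above: peak slam pressure taken as 0.5*rho*Cp*v^2, with the local impact velocity set by the flap's angular velocity and the distance from the hinge. The density, angular velocity and slam coefficient below are assumed placeholders, not values measured in the experiments.

```python
# Minimal sketch (assumed values): first-order water-entry estimate of peak
# slam pressure at points along the flap face.
rho = 1025.0   # sea water density, kg/m^3
omega = 1.2    # flap angular velocity at impact, rad/s (assumed)
c_p = 5.0      # slam pressure coefficient (placeholder, empirically determined in practice)

for r in (4.0, 8.0, 12.0):                # distance from the hinge, m
    v = omega * r                         # local impact velocity, m/s
    p_peak = 0.5 * rho * c_p * v ** 2     # peak slam pressure, Pa
    print(f"r = {r:4.1f} m: v = {v:4.1f} m/s, peak slam pressure = {p_peak / 1e3:6.1f} kPa")
```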

Relevance: 30.00%

Abstract:

Nematode neuropeptide systems comprise an exceptionally complex array of ~250 peptidic signaling molecules that operate within a structurally simple nervous system of ~300 neurons. A relatively complete picture of the neuropeptide complement is available for Caenorhabditis elegans, with 30 flp, 38 ins and 43 nlp genes having been documented; accumulating evidence indicates similar complexity in parasitic nematodes from clades I, III, IV and V. In contrast, the picture for parasitic platyhelminths is less clear, with the limited peptide sequence data available providing concrete evidence for only FMRFamide-like peptide (FLP) and neuropeptide F (NPF) signaling systems, each of which only comprises one or two peptides. With the completion of the Schmidtea mediterranea and Schistosoma mansoni genome projects and expressed sequence tag datasets for other flatworm parasites becoming available, the time is ripe for a detailed reanalysis of neuropeptide signaling in flatworms. Although the actual neuropeptides provide limited obvious value as targets for chemotherapeutic-based control strategies, they do highlight the signaling systems present in these helminths and provide tools for the discovery of more amenable targets such as neuropeptide receptors or neuropeptide processing enzymes. Also, they offer opportunities to evaluate the potential of their associated signaling pathways as targets through RNA interference (RNAi)-based target validation strategies. Currently, within both helminth phyla, the flp signaling systems appear to merit further investigation as they are intrinsically linked with motor function, a proven target for successful anti-parasitics; it is clear that some nematode NLPs also play a role in motor function and could have similar appeal. At this time, it is unclear if flatworm NPF and nematode INS peptides operate in pathways that have utility for parasite control. Clearly, RNAi-based validation could be a starting point for scoring potential target pathways within neuropeptide signaling for parasiticide discovery programs. Also, recent successes in the application of in planta-based RNAi control strategies for plant parasitic nematodes reveal a strategy whereby neuropeptide-encoding genes could become targets for parasite control. The possibility of developing these approaches for the control of animal and human parasites is intriguing, but will require significant advances in the delivery of RNAi triggers.

Relevance: 30.00%

Abstract:

Research over the past two decades on the Holocene sediments of the tide-dominated west side of the lower Ganges delta has focussed on constraining the sedimentary environment through grain size distributions (GSD). GSD has traditionally been assessed using probability density function (PDF) methods (e.g. log-normal and log skew-Laplace functions), but these approaches do not acknowledge the compositional nature of the data, which may compromise lithofacies interpretations. The use of PDF approaches in GSD analysis poses a series of challenges for the development of lithofacies models, such as equifinal distribution coefficients and the obscuring of empirical data variability. In this study, a methodological framework for characterising GSD is presented, combining compositional data analysis (CODA) with multivariate statistics. This provides a statistically more robust analysis of the fine tidal estuary sediments of the West Bengal Sundarbans than alternative PDF approaches.
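
A minimal sketch of the compositional treatment: grain-size class proportions are mapped through a centred log-ratio (clr) transform before ordinary multivariate analysis (here, PCA). The class scheme and sample values are synthetic; the study's full CODA workflow is not reproduced.

```python
# Minimal sketch (synthetic compositions): clr transform of grain-size class
# proportions followed by PCA in the transformed space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Rows: samples; columns: proportions of clay, silt, fine sand, medium sand
raw = rng.dirichlet(alpha=[4, 10, 6, 2], size=50)

def clr(x, eps=1e-6):
    """Centred log-ratio transform of compositions (rows sum to 1)."""
    x = np.clip(x, eps, None)
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

Z = clr(raw)
pca = PCA(n_components=2).fit(Z)
print("variance explained in clr space:", pca.explained_variance_ratio_.round(2))
```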

Relevance: 30.00%

Abstract:

This paper considers inference from multinomial data and addresses the problem of choosing the strength of the Dirichlet prior under a mean-squared error criterion. We compare the Maximum Likelihood Estimator (MLE) with the most commonly used Bayesian estimators, obtained by assuming a prior Dirichlet distribution with non-informative prior parameters, that is, parameters that are equal and together sum to the so-called strength of the prior. Under this criterion, the MLE becomes preferable to the Bayesian estimators as the number of categories k of the multinomial increases, because the non-informative Bayesian estimators induce a region in which they are dominant that quickly shrinks as k increases. This can be avoided if the strength of the prior is not kept constant but is instead decreased with the number of categories. We argue that the strength should decrease at least k times faster than in the usual estimators.
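
A minimal simulation sketch of the comparison: the MLE n_i/N against Bayesian estimators (n_i + s/k)/(N + s) arising from a symmetric Dirichlet prior of strength s, scored by mean-squared error. The true distribution and the strength values tried here are arbitrary illustrations, not the paper's analysis or its recommended schedule for s.

```python
# Minimal sketch (Monte Carlo under one arbitrary true multinomial): MSE of the
# MLE versus symmetric-Dirichlet Bayesian estimators for several prior strengths.
import numpy as np

rng = np.random.default_rng(4)
k, N, reps = 50, 100, 2000
theta = rng.dirichlet(np.ones(k) * 0.3)         # one arbitrary "true" distribution

def mse(estimates):
    return np.mean(np.sum((estimates - theta) ** 2, axis=1))

counts = rng.multinomial(N, theta, size=reps)   # reps datasets of size N
mle = counts / N
print("MLE               MSE = %.5f" % mse(mle))
for s in (1.0, np.sqrt(k), float(k)):           # constant vs k-dependent strengths
    bayes = (counts + s / k) / (N + s)
    print("Dirichlet s=%-6.1f MSE = %.5f" % (s, mse(bayes)))
```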