968 results for Aggregate ichthyofauna

Relevance: 10.00%

Abstract:

In an outsourced database system the data owner publishes information through a number of remote, untrusted servers with the goal of enabling clients to access and query the data more efficiently. As clients cannot trust servers, query authentication is an essential component in any outsourced database system. Clients should be given the capability to verify that the answers provided by the servers are correct with respect to the actual data published by the owner. While existing work provides authentication techniques for selection and projection queries, there is a lack of techniques for authenticating aggregation queries. This article introduces the first known authenticated index structures for aggregation queries. First, we design an index that features good performance characteristics for static environments, where few or no updates occur to the data. Then, we extend these ideas and propose more involved structures for the dynamic case, where the database owner is allowed to update the data arbitrarily. Our structures feature excellent average case performance for authenticating queries with multiple aggregate attributes and multiple selection predicates. We also implement working prototypes of the proposed techniques and experimentally validate the correctness of our ideas.
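
The article's specific index structures are not described above in enough detail to reproduce, but a minimal sketch of one standard way to authenticate range-SUM queries in the static setting (signing a Merkle tree built over prefix sums, so the client verifies two bracketing prefix sums and subtracts) illustrates what client-side verification looks like. All names, values, and design choices below are illustrative assumptions, not the article's constructions.

# Minimal sketch (not the article's structures): authenticating range-SUM queries
# over a static table via a Merkle tree built over prefix sums. The owner signs
# only the root digest; the untrusted server returns the two prefix sums that
# bracket the query range together with their Merkle proofs.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def leaf_digest(i: int, value: int) -> bytes:
    return h(b"leaf", str(i).encode(), str(value).encode())

def build_tree(prefix_sums):
    """Return the list of tree levels: level[0] = leaf digests, last = [root]."""
    level = [leaf_digest(i, v) for i, v in enumerate(prefix_sums)]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last digest if odd
            level = level + [level[-1]]
        level = [h(b"node", level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):
    """Sibling digests from leaf i up to the root (the Merkle proof)."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((i % 2, level[i ^ 1]))     # (position bit, sibling digest)
        i //= 2
    return proof

def verify(root, i, value, proof):
    d = leaf_digest(i, value)
    for bit, sib in proof:
        d = h(b"node", sib, d) if bit else h(b"node", d, sib)
    return d == root

# Owner side: build and sign the authenticated prefix-sum structure.
data = [4, 8, 15, 16, 23, 42]                   # hypothetical aggregate attribute
prefix = [0]
for x in data:
    prefix.append(prefix[-1] + x)               # prefix[i] = sum of data[:i]
levels = build_tree(prefix)
signed_root = levels[-1][0]                     # the owner signs/publishes this digest

# Server side: answer SUM(data[l:r]) and attach a verification object.
l, r = 1, 5
answer = prefix[r] - prefix[l]
vo = {l: (prefix[l], prove(levels, l)), r: (prefix[r], prove(levels, r))}

# Client side: check both prefix sums against the signed root, then subtract.
ok = all(verify(signed_root, i, val, p) for i, (val, p) in vo.items())
assert ok and answer == vo[r][0] - vo[l][0] == sum(data[1:5])

Note that a single value update changes every prefix sum to its right, which hints at why the dynamic case described above calls for more involved structures.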

Relevance: 10.00%

Abstract:

We present a thorough characterization of the access patterns in blogspace -- a fast-growing constituent of the content available through the Internet -- which comprises a rich interconnected web of blog postings and comments by an increasingly prominent user community that collectively defines what has become known as the blogosphere. Our characterization of over 35 million read, write, and administrative requests spanning a 28-day period is done from three different blogosphere perspectives. The server view characterizes the aggregate access patterns of all users to all blogs; the user view characterizes how individual users interact with blogosphere objects (blogs); the object view characterizes how individual blogs are accessed. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different in blogspace from that observed in traditional web content. Access to objects in blogspace can be conceived as part of an interaction between an author and their readership. As we show in our work, such interactions range from one-to-many "broadcast-type" and many-to-one "registration-type" communication between an author and their readers, to multi-way, iterative "parlor-type" dialogues among members of an interest group. This more interactive nature of the blogosphere leads to traffic and communication patterns that differ from those observed in traditional web content. Second, we identify and characterize novel features of the blogosphere workload, and we investigate the similarities and differences between typical web server workloads and blogosphere server workloads. Given the increasing share of blogspace traffic, understanding such differences is important, for example, for capacity planning and traffic engineering.

Relevance: 10.00%

Abstract:

This article develops the Synchronous Matching Adaptive Resonance Theory (SMART) neural model to explain how the brain may coordinate multiple levels of thalamocortical and corticocortical processing to rapidly learn, and stably remember, important information about a changing world. The model clarifies how bottom-up and top-down processes work together to realize this goal, notably how processes of learning, expectation, attention, resonance, and synchrony are coordinated. The model hereby clarifies, for the first time, how the following levels of brain organization coexist to realize cognitive processing properties that regulate fast learning and stable memory of brain representations: single cell properties, such as spiking dynamics, spike-timing-dependent plasticity (STDP), and acetylcholine modulation; detailed laminar thalamic and cortical circuit designs and their interactions; aggregate cell recordings, such as current-source densities and local field potentials; and single cell and large-scale inter-areal oscillations in the gamma and beta frequency domains. In particular, the model predicts how laminar circuits of multiple cortical areas interact with primary and higher-order specific thalamic nuclei and nonspecific thalamic nuclei to carry out attentive visual learning and information processing. The model simulates how synchronization of neuronal spiking occurs within and across brain regions, and triggers STDP. Matches between bottom-up adaptively filtered input patterns and learned top-down expectations cause gamma oscillations that support attention, resonance, and learning. Mismatches inhibit learning while causing beta oscillations during reset and hypothesis testing operations that are initiated in the deeper cortical layers. The generality of learned recognition codes is controlled by a vigilance process mediated by acetylcholine.
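
For readers unfamiliar with STDP, the pair-based rule sketched below (a generic textbook form with illustrative parameters, not the SMART model's actual learning law) shows why spikes synchronized within a gamma cycle of roughly 10-30 ms fall inside the causal potentiation window, whereas longer lags, on the order of a beta period, produce much weaker or depressive changes.

# Generic pair-based STDP rule; parameters are illustrative assumptions only.
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)    # pre before post: potentiation
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # post before pre: depression
    return 0.0

print(stdp_dw(10.0))    # spikes about one gamma cycle apart: strong potentiation
print(stdp_dw(-10.0))   # reversed order: depression
print(stdp_dw(60.0))    # lag near a beta period: much weaker change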

Relevance: 10.00%

Abstract:

How do our brains transform the "blooming buzzing confusion" of daily experience into a coherent sense of self that can learn and selectively attend to important information? How do local signals at multiple processing stages, none of which has a global view of brain dynamics or behavioral outcomes, trigger learning at multiple synaptic sites when appropriate, and prevent learning when inappropriate, to achieve useful behavioral goals in a continually changing world? How does the brain allow synaptic plasticity at a remarkably rapid rate, as anyone who has gone to an exciting movie is readily aware, yet also protect useful memories from catastrophic forgetting? A neural model provides a unified answer by explaining and quantitatively simulating data about single cell biophysics and neurophysiology, laminar neuroanatomy, aggregate cell recordings (current-source densities, local field potentials), large-scale oscillations (beta, gamma), and spike-timing dependent plasticity, and functionally linking them all to cognitive information processing requirements.

Relevance: 10.00%

Abstract:

Selective isoelectric whey protein precipitation and aggregation are carried out at laboratory scale in a standard-configuration batch agitation vessel. Geometric scale-up of this operation is implemented on the basis of constant impeller power input per unit volume, and subsequent clarification is achieved by high-speed disc-stack centrifugation. Particle size and fractal geometry are important in achieving efficient separation, while aggregates need to be strong enough to resist the more extreme levels of shear encountered during processing, for example through pumps, valves and at the centrifuge inlet zone. This study investigates how impeller agitation intensity and ageing time affect aggregate size, strength, fractal dimension and hindered settling rate at laboratory scale, in order to determine conditions conducive to improved separation. Particle strength is measured by observing the effects of subjecting aggregates to moderate and high levels of process shear in a capillary rig and through a partially open ball valve, respectively. The protein precipitate yield is also investigated with respect to ageing time and impeller agitation intensity. A pilot scale study is undertaken to investigate scale-up and how agitation vessel shear affects centrifugal separation efficiency. Laboratory scale studies show that precipitates subjected to higher impeller shear rates during the addition of the precipitation agent are smaller but more compact than those subjected to lower impeller agitation, and are better able to resist turbulent breakage. They are thus more likely to provide a better feed for efficient centrifugal separation. Protein precipitation yield improves significantly with ageing, and 50 minutes of ageing is required to obtain a 70-80% yield of α-lactalbumin. Geometric scale-up of the agitation vessel at constant power per unit volume results in aggregates of broadly similar size exhibiting similar trends, but with some differences arising from the absence of dynamic similarity, namely the longer circulation time and higher tip speed in the larger vessel. Disc-stack centrifuge clarification efficiency curves show that aggregates formed at higher shear rates separate more efficiently, in accordance with laboratory scale projections. Exposure of aggregates to highly turbulent conditions, even for short exposure times, can lead to a large reduction in particle size. Thus, separation efficiency can be improved by identifying high-shear zones in a centrifugal process and subsequently eliminating or mitigating them.
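
The constant power-per-unit-volume criterion can be made concrete: for geometrically similar vessels in the turbulent regime with a constant power number, P = Np * rho * N^3 * D^5 and V is proportional to D^3, so holding P/V fixed gives N2 = N1 * (D1/D2)^(2/3), while the tip speed pi*N*D grows as (D2/D1)^(1/3), consistent with the higher tip speed in the larger vessel noted above. A small sketch with illustrative vessel sizes (not the study's actual dimensions):

# Scale-up at constant impeller power per unit volume (geometric similarity,
# turbulent regime, constant power number assumed). Vessel sizes are illustrative.
def scale_up_speed(n1_rps, d1_m, d2_m):
    """Impeller speed in the large vessel for equal P/V.

    P = Np * rho * N^3 * D^5 and V ~ D^3, so P/V ~ N^3 * D^2 = const
    => N2 = N1 * (D1/D2)**(2/3).
    """
    return n1_rps * (d1_m / d2_m) ** (2.0 / 3.0)

n1, d1, d2 = 5.0, 0.10, 0.30                 # 5 rev/s lab impeller, 3x geometric scale-up
n2 = scale_up_speed(n1, d1, d2)
tip_speed_ratio = (n2 * d2) / (n1 * d1)      # = (d2/d1)**(1/3) > 1: higher tip speed
print(n2, tip_speed_ratio)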

Relevance: 10.00%

Abstract:

This study considers the optimisation of granola breakfast cereal manufacturing processes involving wet granulation and pneumatic conveying. Granola is an aggregated food product used as a breakfast cereal and in cereal bars. Processing of granola involves mixing the dry ingredients (typically oats, nuts, etc.) followed by the addition of a binder, which can contain honey, water and/or oil. In this work, two parallel wet granulation processes for producing aggregated granola products were designed and operated: a) a high shear mixing granulation process followed by drying/toasting in an oven, and b) a continuous fluidised bed process followed by drying/toasting in an oven. In high shear granulation, the influence of process parameters on key granule aggregate quality attributes, such as granule size distribution and the textural properties of granola, was investigated. The experimental results show that the impeller rotational speed is the single most important process parameter influencing granola's physical and textural properties; binder addition rate and wet massing time also have significant effects on granule properties. Increasing the impeller speed and wet massing time increases the median granule size and is positively correlated with density. The combination of high impeller speed and low binder addition rate resulted in granules with the highest levels of hardness and crispness. In the fluidised bed granulation process, the effects of nozzle air pressure and binder spray rate on key aggregate quality attributes were studied. The experimental results show that a decrease in nozzle air pressure leads to a larger mean granule size, and the combination of the lowest nozzle air pressure and lowest binder spray rate results in granules with the highest levels of hardness and crispness. Overall, the high shear granulation process led to larger, denser, less porous and stronger (less likely to break) aggregates than the fluidised bed process. The study also examined the particle breakage during pneumatic conveying of granola produced by both the high shear and the fluidised bed granulation processes. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three conveying rig configurations were employed: a straight pipe, a rig with two 45° bends, and one with a 90° bend. Particle breakage increases with applied pressure drop, and the 90° bend pipe results in more attrition at all conveying velocities than the other pipe geometries. Additionally, for the granules produced in the high shear granulator, those produced at the highest impeller speed, while being the largest, also have the lowest levels of proportional breakage, while the smaller granules produced at the lowest impeller speed have the highest. This effect clearly shows the importance of shear history (during granule production) on breakage during subsequent processing. For the fluidised bed granulation, no single operating parameter was found to have a significant effect on breakage during subsequent conveying. Finally, a simple power-law breakage model based on process input parameters was developed for both manufacturing processes; it was found suitable for predicting the breakage of granola breakfast cereal at various applied air velocities and pipe configurations, taking shear histories into account.
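
The fitted coefficients of the breakage model are not reported in the abstract; the sketch below only illustrates the general power-law form described, with hypothetical inputs and coefficients chosen for illustration.

def breakage_fraction(air_velocity, impeller_speed, k=1e-4, a=2.0, b=-0.5):
    """Illustrative power-law breakage form: breakage grows with conveying air
    velocity and falls with the impeller speed used during granulation
    (a tougher shear history), mirroring the trends reported above."""
    return k * air_velocity ** a * impeller_speed ** b

print(breakage_fraction(air_velocity=20.0, impeller_speed=300.0))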

Relevance: 10.00%

Abstract:

Countries across the world are being challenged to decarbonise their energy systems in response to diminishing fossil fuel reserves, rising GHG emissions and the dangerous threat of climate change. There has been a renewed interest in energy efficiency, renewable energy and low-carbon energy as policy-makers seek to identify and put in place the most robust sustainable energy system that can address this challenge. This thesis seeks to improve the evidence base underpinning energy policy decisions in Ireland, with a particular focus on natural gas, which in 2011 grew to a 30% share of Ireland's TPER. Natural gas is used in all sectors of the Irish economy and is seen by many as a transition fuel to a low-carbon energy system; it is also a uniquely rich source of data for many aspects of energy consumption. A detailed decomposition analysis of natural gas consumption in the residential sector quantifies many of the structural drivers of change, with activity (R² = 0.97) and intensity (R² = 0.69) being the best explainers of changing gas demand. The 2002 residential building regulations are subjected to an ex-post evaluation which, using empirical data, finds a 44 ± 9.5% shortfall in expected energy savings as well as a 13 ± 1.6% level of non-compliance. A detailed energy demand model of the entire Irish energy system is presented, together with scenario analysis of a large number of energy efficiency policies, which together show an aggregate reduction in TFC of 8.9% compared to a reference scenario. The role of natural gas as a transition fuel over a long time horizon (2005-2050) is analysed using an energy systems model and a decomposition analysis, which shows the contribution of fuel switching to natural gas to be worth 12 percentage points of an overall 80% reduction in CO2 emissions. Finally, an analysis of the potential for CCS in Ireland finds gas CCS to be more robust than coal CCS to changes in fuel prices, capital costs and emissions reduction; the cost-optimal location for a gas CCS plant in Ireland is found to be in Cork, with sequestration in the depleted Kinsale gas field.
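
The abstract does not name the decomposition method, but an additive index decomposition of the LMDI-I type is a common choice for this kind of analysis. The two-factor sketch below (illustrative numbers only) shows how a change in gas demand E = activity x intensity splits exactly into an activity effect and an intensity effect.

# Additive LMDI-I style decomposition of E = A * I between a base and a final year.
# Two factors only and invented numbers; the thesis's decomposition has more factors.
import math

def logmean(a, b):
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_two_factor(a0, i0, a1, i1):
    e0, e1 = a0 * i0, a1 * i1
    weight = logmean(e1, e0)
    activity_effect = weight * math.log(a1 / a0)
    intensity_effect = weight * math.log(i1 / i0)
    return activity_effect, intensity_effect   # the two effects sum exactly to e1 - e0

act, inten = lmdi_two_factor(a0=100.0, i0=12.0, a1=130.0, i1=10.5)
print(act, inten, act + inten, 130.0 * 10.5 - 100.0 * 12.0)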

Relevance: 10.00%

Abstract:

Aim: This thesis examines a question posed by founding occupational scientist Dr. Elizabeth Yerxa (1993) – "what is the relationship between human engagement in a daily round of activity (such as work, play, rest and sleep) and the quality of life people experience including their healthfulness" (p. 3). Specifically, I consider Yerxa's question in relation to the quotidian activities and health-related quality of life (HRQoL) of late adolescents (aged 15-19 years) in Ireland. This research enquiry was informed by an occupational perspective of health and by population health, ecological, and positive youth development perspectives. Methods: This thesis comprises five studies. Two scoping literature reviews informed the direction of three empirical studies. In the latter, cross-sectional time use and HRQoL data were collected from a representative sample of 731 school-going late adolescents (response rate 52%) in 28 schools across Cork city and county (response rate 76%). In addition to socio-demographic data, time use data were collected using a standard time diary instrument, while a nationally and internationally validated instrument, the KIDSCREEN-52, was used to measure HRQoL. Variable-centred and person-centred analyses were used. Results: The scoping reviews identified the lack of research on well populations or an adolescent age range within occupational therapy and occupational science; limited research testing the popular assumption that time use is related to overall well-being and quality of life; and the absence of studies examining adolescent 24-hour time use and quality of life. Established international trends were mirrored in the findings of the examination of weekday and weekend time use. Aggregate-level, variable-centred analyses yielded some significant associations between HRQoL and individual activities, independent of school year, school location, family context, social class, nationality or diary day. The person-centred analysis of overall time use identified three male profiles (productive, high leisure and all-rounder) and two female profiles (higher study/lower leisure and moderate study/higher leisure). There was tentative support for an association between higher HRQoL and more balanced time use profiles. Conclusion: The findings of this thesis highlight the gendered nature of adolescent time use and HRQoL. Participation in daily activities, singly and in combination, appears to be associated with HRQoL; however, the nature of this relationship is complex. Individually and collectively, adolescents need to be educated and supported to create health through their everyday patterns of doing.

Relevance: 10.00%

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at each location represents the density of the data traffic being routed through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a spatially varying dielectric constant (or permittivity coefficient). As one application of our vector field model, we offer a scheme for energy-efficient routing. Our routing scheme sets the permittivity coefficient to a higher value in regions of the network where nodes have high residual energy, and to a lower value in regions where nodes have little energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in defining the regions of attraction of the destinations and deciding how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this optimization problem. We define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the network's communication load to the destinations, the potential field takes the same value at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a responsiveness test for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the aggregate's response. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to design these signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. Owing to this orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
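
In outline, and with symbols chosen here purely for illustration rather than taken from the thesis, the vector-field formulation described above amounts to a constrained quadratic minimization:

\[
  \min_{\mathbf{D}} \; J = \int_{A} \frac{|\mathbf{D}(\mathbf{x})|^{2}}{2\,\epsilon(\mathbf{x})} \, dA
  \quad \text{subject to} \quad \nabla \cdot \mathbf{D}(\mathbf{x}) = \rho(\mathbf{x}),
\]

where $\rho > 0$ at sensors (sources of data), $\rho < 0$ at destinations (sinks), and $\epsilon(\mathbf{x})$ plays the role of a spatially varying permittivity, set high where nodes have ample residual energy. The minimizer satisfies $\mathbf{D} = -\epsilon \nabla \phi$ for some potential $\phi$, so that $\nabla \cdot (\epsilon \nabla \phi) = -\rho$, i.e. the electrostatic field equations, and routes follow the direction of $\mathbf{D}$.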

Relevance: 10.00%

Abstract:

This article examines the behavior of equity trading volume and volatility for the individual firms composing the Standard & Poor's 100 composite index. Using multivariate spectral methods, we find that fractionally integrated processes best describe the long-run temporal dependencies in both series. Consistent with a stylized mixture-of-distributions hypothesis model in which the aggregate "news"-arrival process possesses long-memory characteristics, the long-run hyperbolic decay rates appear to be common across each volume-volatility pair.
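
For reference, a fractionally integrated (long-memory) series $y_t$ satisfies (standard definition, not specific to this article's estimates)

\[
  (1 - L)^{d} \, y_{t} = u_{t}, \qquad 0 < d < \tfrac{1}{2},
\]

where $L$ is the lag operator and $u_t$ is a short-memory disturbance; the autocovariances then decay hyperbolically, $\gamma(k) \sim c\,k^{2d-1}$ as $k \to \infty$, rather than geometrically as in standard ARMA models. A common order $d$, and hence a common hyperbolic decay rate, for each volume-volatility pair is what ties both series to the same long-memory news-arrival process.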

Relevance: 10.00%

Abstract:

Human mesenchymal stem cells (hMSCs) and three-dimensional (3D) woven poly(ɛ-caprolactone) (PCL) scaffolds are promising tools for skeletal tissue engineering. We hypothesized that in vitro culture duration and medium additives can individually and interactively influence the structure, composition, mechanical, and molecular properties of engineered tissues based on hMSCs and 3D poly(ɛ-caprolactone). Bone marrow hMSCs were suspended in collagen gel, seeded on scaffolds, and cultured for 1, 21, or 45 days under chondrogenic and/or osteogenic conditions. Structure, composition, biomechanics, and gene expression were analyzed. In chondrogenic medium, cartilaginous tissue formed by day 21, and hypertrophic mineralization was observed in the newly formed extracellular matrix at the interface with underlying scaffold by day 45. Glycosaminoglycan, hydroxyproline, and calcium contents, and alkaline phosphatase activity depended on culture duration and medium additives, with significant interactive effects (all p < 0.0001). The 45-day constructs exhibited mechanical properties on the order of magnitude of native articular cartilage (aggregate, Young's, and shear moduli of 0.15, 0.12, and 0.033 MPa, respectively). Gene expression was characteristic of chondrogenesis and endochondral bone formation, with sequential regulation of Sox-9, collagen type II, aggrecan, core binding factor alpha 1 (Cbfα1)/Runx2, bone sialoprotein, bone morphogenetic protein-2, and osteocalcin. In contrast, osteogenic medium produced limited osteogenesis. Long-term culture of hMSC on 3D scaffolds resulted in chondrogenesis and regional mineralization at the interface between soft, newly formed engineered cartilage, and stiffer underlying scaffold. These findings merit consideration when developing grafts for osteochondral defect repair.

Relevance: 10.00%

Abstract:

We show that "commodity currency" exchange rates have surprisingly robust power in predicting global commodity prices, both in-sample and out-of-sample, and against a variety of alternative benchmarks. This result is of particular interest to policy makers, given the lack of deep forward markets in many individual commodities, and broad aggregate commodity indices in particular. We also explore the reverse relationship (commodity prices forecasting exchange rates) but find it to be notably less robust. We offer a theoretical resolution, based on the fact that exchange rates are strongly forward-looking, whereas commodity price fluctuations are typically more sensitive to short-term demand imbalances. © 2010 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology.

Relevance: 10.00%

Abstract:

© 2012 by Oxford University Press. All rights reserved. This article considers the determinants and effects of M&As in the pharmaceutical industry, with a particular focus on innovation and R&D productivity. As is the case in other industries, mergers in the pharmaceutical field are driven by a variety of company motives and conditions. These include defensive responses to industry shocks as well as more proactive rationales, such as economies of scale and scope, access to new technologies, and expansion to new markets. It is important to take account of firms' characteristics and motivations in evaluating merger performance, rather than using a broad aggregate brushstroke. Research to date on pharmaceuticals suggests considerable variation in both motivation and outcomes. From an antitrust policy standpoint, the larger horizontal mergers in pharmaceuticals have run into few challenges from regulatory authorities in the United States and the European Union, given the option to spin off competing therapeutic products to other drug firms.

Relevance: 10.00%

Abstract:

We estimate a carbon mitigation cost curve for the U.S. commercial sector based on econometric estimation of the responsiveness of fuel demand and equipment choices to energy price changes. The model econometrically estimates fuel demand conditional on fuel choice, which is characterized by a multinomial logit model. Separate estimation of end uses (e.g., heating, cooking) using the U.S. Commercial Buildings Energy Consumption Survey allows for exceptionally detailed estimation of price responsiveness disaggregated by end use and fuel type. We then construct aggregate long-run elasticities, by fuel type, through a series of simulations; own-price elasticities range from -0.9 for district heat services to -2.9 for fuel oil. The simulations form the basis of a marginal cost curve for carbon mitigation, which suggests that a price of $20 per ton of carbon would result in an 8% reduction in commercial carbon emissions, and a price of $100 per ton would result in a 28% reduction. © 2008 Elsevier B.V. All rights reserved.
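
The fuel-choice component referred to above takes the standard multinomial logit form (generic notation, not necessarily the paper's):

\[
  \Pr(\text{fuel} = j \mid x_{i}) = \frac{\exp(\beta_{j}' x_{i})}{\sum_{k} \exp(\beta_{k}' x_{i})},
\]

where $x_i$ collects the prices and building characteristics faced by building $i$. Fuel demand is then estimated conditional on the chosen fuel, and the simulations propagate a carbon price through both the choice probabilities and the conditional demands to produce the aggregate long-run elasticities and the marginal cost curve quoted above.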

Relevance: 10.00%

Abstract:

BACKGROUND: Over the past two decades, genomics has evolved as a scientific research discipline. Genomics research was fueled initially by government and nonprofit funding sources, later augmented by private research and development (R&D) funding. Citizens and taxpayers of many countries have funded much of the research, and have expectations about access to the resulting information and knowledge. While access to knowledge gained from all publicly funded research is desired, access is especially important for fields that have broad social impact and stimulate public dialogue. Genomics is one such field, where public concerns are raised for reasons such as health care and insurance implications, as well as personal and ancestral identification. Thus, genomics has grown rapidly as a field, and attracts considerable interest. RESULTS: One way to study the growth of a field of research is to examine its funding. This study focuses on public funding of genomics research, identifying and collecting data from major government and nonprofit organizations around the world, and updating previous estimates of world genomics research funding, including information about geographical origins. We initially identified 89 publicly funded organizations and requested information about each organization's funding of genomics research. Of these organizations, 48 responded and 34 reported genomics research expenditures (of those that responded but did not supply information, some did not fund such research and others could not quantify it). The figures reported here include all the largest funders, and we estimate that we have accounted for most of the genomics research funding from government and nonprofit sources. CONCLUSION: Aggregate spending on genomics research from 34 funding sources averaged around $2.9 billion in 2003-2006. The United States spent more than any other country on genomics research, corresponding to 35% of the overall worldwide public funding (compared to a 49% US share of public health research funding for all purposes). When adjusted for genomics funding intensity, however, the United States dropped below Ireland, the United Kingdom, and Canada, as measured both by genomics research expenditure per capita and per Gross Domestic Product.