880 results for one-to-many mapping
Abstract:
Queens of many social insect species are known to maintain reproductive monopoly by pheromonal signalling of fecundity. Queens of the primitively eusocial wasp Ropalidia marginata appear to do so using secretions from their Dufour's glands, whose hydrocarbon composition is correlated with fertility. Solitary nest foundresses of R. marginata have no nestmates; expressing a queen signal might therefore seem redundant, since there is no one to receive it. But if the queen pheromone is an honest signal inextricably linked with fertility, it should be expressed irrespective of the presence or absence of receivers, by virtue of being a byproduct of the state of fertility. Hence we compared the Dufour's gland hydrocarbons and ovaries of solitary foundresses with those of queens and workers of post-emergence nests. Our results suggest that queen pheromone composition in R. marginata is a byproduct of fertility and hence can honestly signal fertility. This provides important new evidence for the honest-signalling hypothesis.
Abstract:
This thesis is mainly concerned with the application of groups of transformations to differential equations and in particular with the connection between the group structure of a given equation and the existence of exact solutions and conservation laws. In this respect the Lie-Bäcklund groups of tangent transformations, particular cases of which are the Lie tangent and the Lie point groups, are extensively used.
In Chapter I we first review the classical results of Lie, Bäcklund and Bianchi as well as the more recent ones due mainly to Ovsjannikov. We then concentrate on the Lie-Bäcklund groups (or more precisely on the corresponding Lie-Bäcklund operators), as introduced by Ibragimov and Anderson, and prove some lemmas about them which are useful for the following chapters. Finally we introduce the concept of a conditionally admissible operator (as opposed to an admissible one) and show how this can be used to generate exact solutions.
In Chapter II we establish the group nature of all separable solutions and conserved quantities in classical mechanics by analyzing the group structure of the Hamilton-Jacobi equation. It is shown that consideration of only Lie point groups is insufficient. For this purpose a special type of Lie-Bäcklund groups, those equivalent to Lie tangent groups, is used. It is also shown how these generalized groups induce Lie point groups on Hamilton's equations. The generalization of the above results to any first-order equation in which the dependent variable does not appear explicitly is straightforward. In the second part of this chapter we investigate admissible operators (or equivalently constants of motion) of the Hamilton-Jacobi equation with polynomial dependence on the momenta. The form of the most general constant of motion linear, quadratic and cubic in the momenta is explicitly found. Emphasis is given to the quadratic case, where the particular case of a fixed (say zero) energy state is also considered; it is shown that in the latter case additional symmetries may appear. Finally, some potentials of physical interest admitting higher symmetries are considered. These include potentials due to two centers and limiting cases thereof. The most general two-center potential admitting a quadratic constant of motion is obtained, as well as the corresponding invariant. Also some new cubic invariants are found.
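To fix notation for the quadratic case, here is a hedged sketch in standard conventions (not necessarily the thesis' own): for a natural Hamiltonian $H = \tfrac12\, g^{ij} p_i p_j + V(q)$, a constant of motion quadratic in the momenta takes the form

```latex
I = K^{ij}(q)\, p_i p_j + U(q), \qquad
\{I, H\} = 0 \;\Longleftrightarrow\;
\nabla^{(i} K^{jk)} = 0 \quad\text{and}\quad \partial_i U = 2\, K_i^{\ j}\, \partial_j V ,
```

so the cubic-in-momenta part of the Poisson bracket forces $K^{ij}$ to be a Killing tensor, while the linear part couples $U$ to the potential; the integrability of the second condition is what restricts the admissible potentials (up to normalization conventions).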
In Chapter III we first establish the group nature of all separable solutions of any linear, homogeneous equation. We then concentrate on the Schrödinger equation and look for an algorithm which generates a quantum invariant from a classical one. The problem of an isomorphism between classical and quantum observables is studied concretely and constructively. For functions at most quadratic in the momenta an isomorphism is possible which agrees with Weyl's transform and which takes invariants into invariants. It is not possible to extend the isomorphism indefinitely. The requirement that an invariant goes into an invariant may necessitate variants of Weyl's transform. This is illustrated for the case of cubic invariants. Finally, the case of a specific value of energy is considered; in this case Weyl's transform does not yield an isomorphism even for the quadratic case. However, for this case a correspondence mapping a classical invariant to a quantum one is explicitly found.
Chapters IV and V are concerned with the general group structure of evolution equations. In Chapter IV we establish a one-to-one correspondence between admissible Lie-Bäcklund operators of evolution equations (derivable from a variational principle) and conservation laws of these equations. This correspondence takes the form of a simple algorithm.
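For orientation, a hedged sketch of how such a Noether-type correspondence typically runs, in generic notation that may differ from the thesis': a Lie-Bäcklund operator with characteristic $\eta$ is a variational symmetry of the Lagrangian $L$ exactly when there exist densities $(\rho, J)$ satisfying

```latex
\eta \, E(L) = D_t \rho + D_x J ,
```

where $E$ is the Euler operator and $D_t, D_x$ are total derivatives; on solutions $E(L) = 0$, leaving the conservation law $D_t \rho + D_x J = 0$, and conversely each conservation law supplies a characteristic that defines an admissible operator.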
In Chapter V we first establish the group nature of all Bäcklund transformations (BT) by proving that any solution generated by a BT is invariant under the action of some conditionally admissible operator. We then use an algorithm based on invariance criteria to rederive many known BT and to derive some new ones. Finally, we propose a generalization of BT which, among other advantages, clarifies the connection between the wave-train solution and a BT, in the sense that a BT may be thought of as a variation of parameters of some special case of the wave-train solution (usually the solitary-wave one). Some open problems are indicated.
Most of the material of Chapters II and III is contained in [I], [II], [III] and [IV] and the first part of Chapter V in [V].
Abstract:
Surface mass loads come in many different varieties, including the oceans, atmosphere, rivers, lakes, glaciers, ice caps, and snow fields. The loads migrate over Earth's surface on time scales that range from less than a day to many thousands of years. The weights of the shifting loads exert normal forces on Earth's surface. Since the Earth is not perfectly rigid, the applied pressure deforms the shape of the solid Earth in a manner controlled by the material properties of Earth's interior. One of the most prominent types of surface mass loading, ocean tidal loading (OTL), comes from the periodic rise and fall in sea-surface height due to the gravitational influence of celestial objects, such as the moon and sun. Depending on geographic location, the surface displacements induced by OTL typically range from millimeters to several centimeters in amplitude, and may be inferred from Global Navigation Satellite System (GNSS) measurements with sub-millimeter precision. Spatiotemporal characteristics of observed OTL-induced surface displacements may therefore be exploited to probe Earth structure. In this thesis, I present descriptions of contemporary observational and modeling techniques used to explore Earth's deformation response to OTL and other varieties of surface mass loading. With the aim of extracting information about Earth's density and elastic structure from observations of the response to OTL, I investigate the sensitivity of OTL-induced surface displacements to perturbations in the material structure. As a case study, I compute and compare observed and predicted OTL-induced surface displacements for a network of GNSS receivers across South America. The residuals in three distinct and dominant tidal bands are sub-millimeter in amplitude, indicating that modern ocean-tide and elastic-Earth models predict the observed displacement response in that region well. Nevertheless, the sub-millimeter residuals exhibit regional spatial coherency that cannot be explained entirely by random observational uncertainties and that suggests deficiencies in the forward-model assumptions. In particular, the discrepancies may reveal sensitivities to deviations from spherically symmetric, non-rotating, elastic, and isotropic (SNREI) Earth structure due to the presence of the South American craton.
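As a toy illustration of the harmonic forward model being compared against GNSS data, consider the sketch below; the constituent names and angular speeds are standard, but the amplitudes and phases are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

# Hypothetical harmonic constants for the vertical OTL displacement at one
# GNSS station: each constituent contributes A * cos(omega * t - phi).
CONSTITUENTS = {
    # name: (amplitude [mm], angular speed [deg/hour], phase lag [deg])
    "M2": (12.0, 28.9841042, 40.0),
    "S2": (4.0, 30.0000000, 65.0),
    "O1": (3.0, 13.9430356, 110.0),
}

def otl_displacement(t_hours: np.ndarray) -> np.ndarray:
    """Predicted vertical displacement (mm) as a sum of tidal harmonics."""
    d = np.zeros_like(t_hours, dtype=float)
    for amp, speed_deg, phase_deg in CONSTITUENTS.values():
        omega = np.deg2rad(speed_deg)  # angular speed in rad/hour
        phi = np.deg2rad(phase_deg)
        d += amp * np.cos(omega * t_hours - phi)
    return d

t = np.arange(0.0, 48.0, 0.25)     # two days at 15-minute sampling
print(otl_displacement(t).max())   # peak predicted displacement, in mm
```

Residuals in each tidal band are then differences between the observed amplitudes and phases estimated from GNSS time series and such model predictions.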
Abstract:
Public policies are structured to serve as a response by the public authorities to the various demands, problems, and tensions generated in society. They must have social magnitude and relevance, as well as sufficient bargaining power to enter the priority agenda of a given policy-making body. A policy is thus constituted by its purpose, its guidelines, and the definition of responsibilities of the spheres of government and of the bodies involved. The Brazilian medicines policy, embedded in the national Health Policy, is accordingly one of the fundamental elements for implementing actions capable of improving health conditions. It advocates guaranteeing the availability, access, and rational use of medicines for all sectors of the population, according to their morbidity and mortality profile. From this perspective, the present work set out to analyze the National Medicines Policy (Política Nacional de Medicamentos, PNM) in order to understand the data found. Based on a qualitative approach, and taking into account what the founding document of the PNM makes explicit, the data were mapped and analyzed together with a literature review, generating categories (context, content, and processes involved). This study led to the conclusion that the PNM does not cover many of the problems related to medicine use, nor has it secured sufficient tools to provide all the governmental responses needed for many of the problems it raised, or even for existing problems it did not contemplate. The governments, both the one that formulated it and those that succeeded it, have advanced its guidelines or continue striving to do so, in order to contribute to realizing the right to comprehensive therapeutic care.
Abstract:
Two large hydrologic issues face the Kings Basin: severe and chronic overdraft of about 0.16M ac-ft annually, and flood risks along the Kings River and the downstream San Joaquin River. Since 1983, these floods have caused over $1B in damage in today's dollars. Capturing flood flows of sufficient volume could help address these two pressing issues, which are relevant to many regions of the Central Valley and will only be exacerbated by climate change. However, the Kings River has high variability in flow magnitude, which suggests that standard engineering approaches, with acquisition of sufficient acreage through purchase and easements to capture and recharge flood waters, would not be cost effective. An alternative approach investigated in this study, termed On-Farm Flood Flow Capture, involves leveraging large areas of private farmland to capture flood flows for both direct and in lieu recharge. This study investigated the technical and logistical feasibility of best management practices (BMPs) associated with On-Farm Flood Flow Capture. The investigation was conducted near Helm, CA, about 20 miles west of Fresno, CA. The experimental design identified a coordinated plan to determine infiltration rates for different soil series and different crops; develop a water budget for water applied throughout the program and estimate direct and in lieu recharge; provide a preliminary assessment of potential water quality impacts; assess logistical issues associated with implementation; and provide an economic summary of the program. At check locations, we measured average infiltration rates of 4.2 in/d across all fields and noted that infiltration rates decreased asymptotically over time to about 2 – 2.5 in/d. Rates did not differ significantly among the crops and soils tested, but were about an order of magnitude higher in one field. At a 2.5 in/d infiltration rate, roughly 100 acres are required to infiltrate 10 CFS of captured flood flows. Applied flood flows from the Kings River had concentrations of constituents of concern (COC; i.e. nitrate, electrical conductivity or EC, phosphate, ammonium, total dissolved solids or TDS) an order of magnitude or more lower than pumped groundwater at Terranova Ranch, and similarly relative to a broader survey of regional groundwater. Applied flood flows flushed the root zone and upper vadose zone of nitrate and salts, leading to much lower EC and nitrate concentrations to a depth of 8 feet compared with fields in which more limited flood flows were applied or for which drip irrigation with groundwater was the sole water source. In demonstrating this technology on the farm, approximately 3,100 ac-ft was diverted, primarily from April through mid-July, with about 70% going to in lieu and 30% to direct recharge. Substantial flood flow volumes were applied to alfalfa, wine grape, and pistachio fields. A subset of those fields, primarily wine grapes and pistachios, was used primarily to demonstrate direct recharge; for those fields, about 50 – 75% of the applied water was calculated to go to direct recharge. Data from the check studies suggest that more flood flows could have been applied and infiltrated, effectively driving up the amount of water going to direct recharge. Costs to capture flood flows for in lieu and direct recharge in this project were low compared with recharge costs for other nearby systems and with the cost of irrigating with groundwater.
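The quoted acreage follows from a quick unit conversion (a back-of-envelope check of the report's figure, using 1 ac-ft = 43,560 ft³):

```latex
10\ \mathrm{ft^3/s} \times 86{,}400\ \mathrm{s/d} = 864{,}000\ \mathrm{ft^3/d}
\approx 19.8\ \text{ac-ft/d}, \qquad
\frac{19.8\ \text{ac-ft/d}}{(2.5/12)\ \mathrm{ft/d}} \approx 95\ \text{acres}
\approx 100\ \text{acres}.
```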
Moreover, the potentially high flood-capture capacity of this project suggests significant flood-avoidance cost savings to downstream communities along the Kings and San Joaquin Rivers. Our analyses for Terranova Ranch suggest that allocating 25% or more of captured flood flow to in lieu recharge and the rest to direct recharge will result in an economically sustainable recharge approach paid for through savings from reduced groundwater pumping. Two important issues need further consideration. First, these practices are likely to leach legacy salts and nitrates from the unsaturated zone into groundwater. We developed a conceptual model of EC movement through the unsaturated zone and estimated through mass-balance calculations that approximately 10 kilograms per square meter of salts will be flushed into the groundwater by displacing 12 cubic meters per square meter of unsaturated-zone pore water. This flux would increase groundwater salinity, but we predict that an equivalent volume of water added subsequently would be needed to return to current groundwater salinity levels; all subsequent flood flow capture and recharge is expected to further decrease groundwater salinity. Second, the project identified important farm-scale logistical issues including irrigator training; developing cropping plans to integrate farming and recharge activities; upgrading conveyance; and quantifying results. Regional logistical issues also exist related to conveyance, integration with agricultural management, economics, required acreage, and operation and maintenance (O&M).
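As a rough consistency check (my arithmetic, not a figure from the report), the two mass-balance numbers imply a mean salinity of the displaced pore water of

```latex
\frac{10\ \mathrm{kg/m^2}}{12\ \mathrm{m^3/m^2}} \approx 0.83\ \mathrm{kg/m^3}
\approx 830\ \mathrm{mg/L},
```

which is the concentration scale at which the flushed salts would initially load the receiving groundwater.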
Abstract:
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman's coalescent, Dirichlet diffusion trees and Wishart processes.
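As one self-contained example of the tools surveyed, here is a minimal sketch (my own illustration, not code from the article) of the stick-breaking construction that yields the mixture weights of a Dirichlet process DP(alpha, H):

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha: float, n_atoms: int) -> np.ndarray:
    """Truncated stick-breaking weights w_k = v_k * prod_{j<k}(1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_atoms)                     # stick fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining                                       # mixture weights

weights = stick_breaking(alpha=2.0, n_atoms=1000)
atoms = rng.normal(size=1000)  # atom locations drawn from the base measure H
print(weights.sum())           # close to 1 for a deep enough truncation
```

Smaller concentration parameters alpha put most of the mass on a few atoms (strong clustering); larger values spread mass over many atoms.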
Abstract:
Urquhart, C. (2006). From epistemic origins to journal impact factors: what do citations tell us? International Journal of Nursing Studies, 43(1), 1-2.
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a very different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive-engine system design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
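A minimal sketch of the competitive-queuing idea as described above (the variable names and winner-take-all dynamics are my simplification of the model class, not the authors' implementation):

```python
import numpy as np

def competitive_queuing(activations: np.ndarray) -> list[int]:
    """Serialize a parallel priority gradient: the most active item wins the
    cyclic competition, is performed, and is then suppressed."""
    a = activations.astype(float).copy()
    order = []
    while np.any(a > 0):
        winner = int(np.argmax(a))  # competition: highest activation performs
        order.append(winner)
        a[winner] = 0.0             # self-inhibition after performance
    return order

# A planned sequence encoded as an analog gradient: higher value = earlier.
print(competitive_queuing(np.array([0.6, 0.3, 0.9])))  # -> [2, 0, 1]
```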
Abstract:
The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep, and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals.
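The dimension-reduction step can be sketched roughly as below; the data, task count, and variable names are hypothetical stand-ins, not the study's actual pipeline:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
scores = rng.normal(size=(230, 10))  # hypothetical: 230 people x 10 tasks

fa = FactorAnalysis(n_components=3, random_state=0)
factors = fa.fit_transform(scores)   # per-person factor scores, (230, 3)
loadings = fa.components_            # task loadings per factor, (3, 10)
print(factors.shape, loadings.shape)
```

Per-person factor scores would then be regressed on demographic, state, sleep, and consumption variables to locate sources of inter-individual variability.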
Abstract:
BACKGROUND: In the domain of academia, the scholarship of research may include, but is not limited to, peer-reviewed publications, presentations, or grant submissions. Programmatic research productivity is one of many measures of academic program reputation and ranking. Another measure of learning success among physical therapist education programs in the USA is a 100% three-year pass rate of graduates on the standardized National Physical Therapy Examination (NPTE). In this study, we endeavored to determine whether there was an association between research productivity, measured through artifacts, and 100% three-year pass rates on the NPTE. METHODS: This observational study used pre-approved database exploration representing all accredited programs in the USA that graduated physical therapists during 2009, 2010 and 2011. Descriptive variables captured included raw research productivity artifacts such as peer-reviewed publications and books, number of professional presentations, number of scholarly submissions, total grant dollars, and number of grants submitted. Descriptive statistics and comparisons (using chi-square and t-tests) among program characteristics and research artifacts were calculated. Univariate logistic regression analyses with appropriate control variables were used to determine associations between research artifacts and 100% pass rates. RESULTS: Numbers of scholarly artifacts submitted, faculty with grants, and grant proposals submitted were significantly higher in programs with 100% three-year pass rates. However, after controlling for program characteristics such as grade point average, diversity percentage of the cohort, public/private institution, and number of faculty, there were no significant associations between scholarly artifacts and 100% three-year pass rates. CONCLUSIONS: Factors other than research artifacts are likely better predictors of passing the NPTE.
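The association-with-controls design can be sketched as follows (hypothetical column names and simulated data, purely for illustration of the method, not the study's code):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "pass100": rng.integers(0, 2, n),   # 1 = 100% three-year NPTE pass rate
    "publications": rng.poisson(5, n),  # research artifact of interest
    "gpa": rng.normal(3.5, 0.2, n),     # control variables follow
    "faculty_n": rng.integers(5, 25, n),
    "public": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["publications", "gpa", "faculty_n", "public"]])
result = sm.Logit(df["pass100"], X).fit(disp=0)
print(result.params["publications"])  # association after controls
```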
Abstract:
This article uses attitudinal data to explore Catholic and Protestant perspectives on community relations and equality since the paramilitary ceasefires in 1994. Although attitudes tend to fluctuate with the 'headline grabbing' events of the day, the article argues that there are signs that some fundamental changes have taken place in the post-ceasefire period. Of particular importance in this regard is the positive response recorded by the Catholic community towards government measures to tackle disadvantage and inequality. Equally significant is the Protestant response to many of these measures, which is often one of ambivalence rather than derision. In so far as the data appear to challenge the 'zero-sum' game that traditionally underpins relations between the two communities in Northern Ireland, they provide some grounds for optimism. Yet such optimism is tempered somewhat by the seeds of discontent which are manifest within the Protestant community, particularly around issues of equality in employment and cultural traditions. Despite the more positive assessment of community relations and equality in 2002, it is argued that further monitoring will be required to determine the long-term effects of policy reform on relationships between the two communities.
Abstract:
We show that a dense spectrum of chaotic multiply excited eigenstates can play a major role in collision processes involving many-electron multicharged ions. A statistical theory based on the chaotic properties of the eigenstates enables one to obtain relevant energy-averaged cross sections in terms of sums over single-electron orbitals. Our calculation of low-energy electron recombination with Au25+ shows that the resonant process is 200 times more intense than direct radiative recombination, which explains the recent experimental results of Hoffknecht et al. [J. Phys. B 31, 2415 (1998)].
Abstract:
A many-body theory approach is developed for the problem of positron-atom scattering and annihilation. Strong electron-positron correlations are included nonperturbatively through the calculation of the electron-positron vertex function. It corresponds to the sum of an infinite series of ladder diagrams, and describes the physical effect of virtual positronium formation. The vertex function is used to calculate the positron-atom correlation potential and nonlocal corrections to the electron-positron annihilation vertex. Numerically, we make use of B-spline basis sets, which ensures rapid convergence of the sums over intermediate states. We have also devised an extrapolation procedure that allows one to achieve convergence with respect to the number of intermediate-state orbital angular momenta included in the calculations. As a test, the present formalism is applied to positron scattering and annihilation on hydrogen, where it is exact. Our results agree with those of accurate variational calculations. We also examine in detail the properties of the large correlation corrections to the annihilation vertex.
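Schematically, and hedged as generic many-body notation rather than the authors' exact conventions, the ladder series for the electron-positron vertex function resums as a linear integral equation,

```latex
\Gamma(E) = V + V\, \frac{1}{E - \hat H_e - \hat H_p}\, \Gamma(E) ,
```

whose iteration $\Gamma = V + V G_0 V + V G_0 V G_0 V + \cdots$ generates the infinite sequence of ladder diagrams; here $V$ is the electron-positron Coulomb interaction and the resolvent runs over intermediate electron and positron states (expanded in the B-spline bases mentioned above).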
Abstract:
In 1878, one of Britain's largest banks, the City of Glasgow Bank, collapsed, leaving a huge deficit between its assets and liabilities. As this bank, like many other contemporary British banks, had unlimited liability, its failure was accompanied by the bankruptcy of the vast majority of its stockholders. It is generally believed that the collapse of this depository institution revealed the extent to which ownership in large joint-stock banks had been diffused to investors of very modest means. It is also believed that the failure resulted in bank shareholders dumping their shares onto the market. Our evidence, garnered from ownership records, trading data, and stock prices, offers no support for these widely held beliefs. © 2007 Elsevier Inc. All rights reserved.