910 results for Almost Optimal Density Function
Abstract:
The wind regime of a region can be described by frequency distributions that provide information and characteristics essential for the possible deployment of wind energy capture systems in the region, with consequent applications in rural settings in remote areas. These characteristics, such as the annual mean wind speed, the variance of the recorded speeds, and the mean hourly wind power density, can be obtained from the frequency of occurrence of a given speed, which in turn should be studied through analytical expressions. The analytical function best suited to wind distributions is the Weibull density function, which can be determined by numerical methods and linear regressions. The objective of this work is to characterize, analytically and geometrically, all the methodological procedures needed to carry out a complete characterization of the wind regime of a region and its applications in the region of Botucatu - SP, aiming to determine the energy potential for the installation of wind turbines. It was thus possible to establish theorems related to how the wind regime is characterized, laying out an analytically concise methodology for defining the wind parameters of any region to be studied. A CAMPBELL anemometer was used to carry out this research.
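The Weibull fit via linear regression mentioned above can be sketched as follows. This is a minimal illustration, not the authors' code: the median-rank plotting position and the air-density default are my own assumptions, and the power-density formula is the standard Weibull result P = 0.5·ρ·c³·Γ(1 + 3/k).

```python
import math

def fit_weibull(speeds):
    """Estimate the Weibull shape k and scale c from wind-speed data
    by linear regression on the linearized CDF:
        ln(-ln(1 - F)) = k*ln(v) - k*ln(c)."""
    v = sorted(speeds)
    n = len(v)
    xs, ys = [], []
    for i, vi in enumerate(v, start=1):
        F = (i - 0.3) / (n + 0.4)          # median-rank plotting position
        xs.append(math.log(vi))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)      # regression slope = shape k
    c = math.exp(-(my - k * mx) / k)        # intercept = -k*ln(c)
    return k, c

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) of a Weibull regime:
    P = 0.5 * rho * c**3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)
```

For example, speeds drawn from a Weibull regime with shape 2 and scale 8 m/s recover roughly those parameters, and a mean power density in the low 400s of W/m².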
Abstract:
There are different applications in Engineering that require computing improper integrals of the first kind (integrals defined on an unbounded domain), such as: the work required to move an object from the surface of the Earth to infinity (Kinetic Energy), the electric potential created by a charged sphere, the probability density function or the cumulative distribution function in Probability Theory, the values of the Gamma function (which is useful for computing the Beta function, used in turn to compute trigonometric integrals), and the Laplace and Fourier transforms (very useful, for example, in Differential Equations).
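As a hedged illustration of handling such improper integrals numerically (not code from the text), the Gamma function integral can be approximated by truncating the unbounded domain at a finite point where the exponential decay makes the tail negligible; the cutoff and step count below are illustrative choices.

```python
import math

def gamma_numeric(z, upper=60.0, n=50_000):
    """Approximate Gamma(z) = integral_0^inf t^(z-1) e^(-t) dt for z > 1
    by truncating the unbounded domain at `upper` (the integrand decays
    like e^(-t), so the discarded tail is negligible) and applying the
    composite trapezoid rule on [0, upper]."""
    h = upper / n
    # endpoint terms: the integrand is 0 at t=0 for z > 1, and ~0 at t=upper
    total = 0.5 * (0.0 + upper ** (z - 1) * math.exp(-upper))
    for i in range(1, n):
        t = i * h
        total += t ** (z - 1) * math.exp(-t)
    return total * h
```

For instance, gamma_numeric(5) recovers 4! = 24 to within the quadrature error.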
Abstract:
This paper shows that the proposed Rician shadowed model for multi-antenna communications allows for the unification of a wide set of models, both for multiple-input multiple-output (MIMO) and single-input single-output (SISO) communications. The MIMO Rayleigh and MIMO Rician models can be deduced from the MIMO Rician shadowed model, and so can their SISO counterparts. Other more general SISO models, besides the Rician shadowed, are included in the model, such as the κ-μ and its recent generalization, the κ-μ shadowed model. Moreover, the SISO η-μ and Nakagami-q models are also included in the MIMO Rician shadowed model. The literature already presents the probability density function (pdf) of the Rician shadowed Gram channel matrix in terms of the well-known gamma-Wishart distribution. We here derive its moment generating function in a tractable form. Closed-form expressions for the cumulative distribution function and the pdf of the maximum eigenvalue are also derived.
Abstract:
Purpose This thesis is about liveability, place and ageing in the high density urban landscape of Brisbane, Australia. As with other major developed cities around the globe, Brisbane has adopted policies to increase urban residential densities to meet the main liveability and sustainability aim of decreasing car dependence and therefore pollution, as well as to minimise the loss of greenfield areas and habitats to developers. This objective hinges on urban neighbourhoods/communities being liveable places, which residents do not have to leave for everyday living. Community/neighbourhood liveability is an essential ingredient in healthy ageing in place and has a substantial impact upon the safety, independence and well-being of older adults. It is generally accepted that ageing in place is optimal for both older people and the state. The optimality of ageing in place generally assumes that there is a particular quality to environments or standard of liveability in which people successfully age in place. The aim of this thesis was to examine if there are particular environmental qualities or aspects of liveability that test optimality and to better understand the key liveability factors that contribute to successful ageing in place. Method A strength of this thesis is that it draws on two separate studies to address the research question of what makes high density liveable for older people. In Chapter 3, the two methods are identified and differentiated as Method 1 (used in Paper 1) and Method 2 (used in Papers 2, 3, 4 and 5). Method 1 involved qualitative interviews with 24 inner city high density Brisbane residents. The major strength of this thesis is the innovative methodology outlined in the thesis as Method 2. Method 2 involved a case study approach employing qualitative and quantitative methods. Qualitative data was collected using semi-structured, in-depth interviews and time-use diaries completed by participants during the week of tracking. 
The quantitative data was gathered using Global Positioning Systems for tracking and Geographical Information Systems for mapping and analysis of participants’ activities. The combination of quantitative and qualitative analysis captured both participants’ subjective perceptions of their neighbourhoods and their patterns of movement. This enhanced understanding of how neighbourhoods and communities function and of the various liveability dimensions that contribute to active ageing and ageing in place for older people living in high density environments. Both studies’ participants were inner-city high density residents of Brisbane. The study based on Method 1 drew on a wider age demographic than the study based on Method 2. Findings The five papers presented in this thesis by publication indicate a complex inter-relationship of the factors that make a place liveable. The first three papers identify what is comparable and different between the physical and social factors of high density communities/neighbourhoods. The last two papers explore relationships between social engagement and broader community variables such as infrastructure and the physical built environments that are risk or protective factors relevant to community liveability, active ageing and ageing in place in high density. The research highlights the importance of creating and/or maintaining a barrier-free environment and liveable community for ageing adults. Together, the papers promote liveability, social engagement and active ageing in high density neighbourhoods by identifying factors that constitute liveability and strategies that foster active ageing and ageing in place, social connections and well-being. Recommendations There is a strong need to offer more support for active ageing and ageing in place. While the data analyses of this research provide insight into the lived experience of high density residents, further research is warranted. 
Further qualitative and quantitative research is needed to explore in more depth the urban experience and opinions of older people living in urban environments. In particular, more empirical research and theory-building is needed in order to expand understanding of the particular environmental qualities that enable successful ageing in place in our cities and to guide efforts aimed at meeting this objective. The results suggest that encouraging the presence of more inner city retail outlets, particularly services that are utilised frequently in people’s daily lives such as supermarkets, medical services and pharmacies, would potentially help ensure residents fully engage in their local community. The connectivity of streets and footpaths and their role in facilitating the reaching of destinations are well understood as an important dimension of liveability. To encourage uptake of sustainable transport, the built environment must provide easy, accessible connections between buildings, walkways, cycle paths and public transport nodes. Wider streets, given that they take more time to cross than narrow streets, tend to compromise safety - especially for older people. Similarly, the width of footpaths, the level of buffering, the presence of trees, lighting, seating and the design of and distance between pedestrian crossings significantly affect the pedestrian experience for older people and impact upon their choice of transportation. High density neighbourhoods also require greater levels of street fixtures and furniture for everyday life to make places more useable and comfortable for regular use. The importance of making the public realm useful and habitable for older people cannot be over-emphasised. Originality/value While older people are attracted to high density settings, there has been little empirical evidence linking liveability satisfaction with older people’s use of urban neighbourhoods.
The current study examined the relationships between community/neighbourhood liveability, place and ageing to better understand the implications for those adults who age in place. The five papers presented in this thesis add to the understanding of what high density liveable age-friendly communities/neighbourhoods are and what makes them so for older Australians. Neighbourhood liveability for older people is about being able to age in place and remain active. Issues of ageing in Australia and other areas of the developed world will become more critical in the coming decades. Creating liveable communities for all ages calls for partnerships across all levels of government agencies and among different sectors within communities. The increasing percentage of older people in the community will have increasing political influence, and it will be a foolish government that ignores the needs of an older society.
Abstract:
Purpose: To determine whether there is a difference in neuroretinal function and in macular pigment optical density between persons with high- and low-risk gene variants for age-related macular degeneration (AMD) and no ophthalmoscopic signs of AMD, and to compare the results on neuroretinal function to patients with manifest early AMD. Methods and Participants: Neuroretinal function was assessed with the multifocal electroretinogram (mfERG) for 32 participants (22 healthy persons with no AMD and 10 early AMD patients). The 22 healthy participants with no AMD had high- or low-risk genotypes for either CFH (rs380390) and/or ARMS2 (rs10490924). Trough-to-peak response densities and peak-implicit times were analyzed in 5 concentric rings. Macular pigment optical densitometry was assessed by customized heterochromatic flicker photometry. Results: Trough-to-peak response densities for concentric rings 1 to 3 were, on average, significantly greater in participants with high-risk genotypes than in participants with low-risk genotypes and in persons with early AMD after correction for age and smoking (p<0.05). The group peak-implicit times for ring 1 were, on average, delayed in the patients with early AMD compared with the participants with high- or low-risk genotypes, although these differences were not significant. There was no significant correlation between genotypes and macular pigment optical density. Conclusion: Increased neuroretinal activity in persons who carry high-risk AMD genotypes may be due to genetically determined subclinical inflammatory and/or histological changes in the retina. Neuroretinal function in healthy persons genetically susceptible to AMD may be a useful additional early biomarker (in combination with genetics) before there is clinical manifestation.
Abstract:
Background: Molecular marker technologies are undergoing a transition from largely serial assays measuring DNA fragment sizes to hybridization-based technologies with high multiplexing levels. Diversity Arrays Technology (DArT) is a hybridization-based technology that is increasingly being adopted by barley researchers. There is a need to integrate the information generated by DArT with previous data produced with gel-based marker technologies. The goal of this study was to build a high-density consensus linkage map from the combined datasets of ten populations, most of which were simultaneously typed with DArT and Simple Sequence Repeat (SSR), Restriction Fragment Length Polymorphism (RFLP) and/or Sequence Tagged Site (STS) markers. Results: The consensus map, built using a combination of JoinMap 3.0 software and several purpose-built Perl scripts, comprised 2,935 loci (2,085 DArT, 850 other loci) and spanned 1,161 cM. It contained a total of 1,629 'bins' (unique loci), with an average inter-bin distance of 0.7 ± 1.0 cM (median = 0.3 cM). More than 98% of the map could be covered with a single DArT assay. The arrangement of loci was very similar to, and almost as optimal as, the arrangement of loci in component maps built for individual populations. The locus order of a synthetic map derived from merging the component maps without considering the segregation data was only slightly inferior. The distribution of loci along chromosomes indicated centromeric suppression of recombination in all chromosomes except 5H. DArT markers appeared to have a moderate tendency toward hypomethylated, gene-rich regions in distal chromosome areas. On average, 14 ± 9 DArT loci were identified within 5 cM on either side of SSR, RFLP or STS loci previously identified as linked to agricultural traits.
Conclusion: Our barley consensus map provides a framework for transferring genetic information between different marker systems and for deploying DArT markers in molecular breeding schemes. The study also highlights the need for improved software for building consensus maps from high-density segregation data of multiple populations.
Abstract:
We consider a multicommodity flow problem on a complete graph whose edges have random, independent, and identically distributed capacities. We show that, as the number of nodes tends to infinity, the maximum utility, given by the average of a concave function of each commodity flow, has an almost-sure limit. Furthermore, the asymptotically optimal flow uses only direct and two-hop paths, and can be obtained in a distributed manner.
Abstract:
We develop in this article the first actor-critic reinforcement learning algorithm with function approximation for a problem of control under multiple inequality constraints. We consider the infinite horizon discounted cost framework in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample path functions. We apply the Lagrange multiplier method to handle the inequality constraints. Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that makes a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal policy.
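The Lagrange-multiplier treatment of inequality constraints combined with multi-timescale iteration can be illustrated on a toy deterministic problem. This is a sketch under assumed parameters, not the paper's algorithm: minimize x² subject to x ≥ 1, with a fast primal descent step and a slower projected dual ascent step.

```python
def constrained_min(a=0.05, b=0.005, steps=50_000):
    """Two-timescale Lagrangian iteration for: min x^2  s.t.  1 - x <= 0.
    Lagrangian: L(x, lam) = x^2 + lam * (1 - x).
    The primal variable descends on L on the fast timescale (step a);
    the multiplier ascends on the constraint violation on the slow
    timescale (step b), projected to stay non-negative.
    The constrained optimum is x* = 1 with multiplier lam* = 2."""
    x, lam = 0.0, 0.0
    for _ in range(steps):
        x -= a * (2.0 * x - lam)              # fast: dL/dx = 2x - lam
        lam = max(0.0, lam + b * (1.0 - x))   # slow: dL/dlam = 1 - x
    return x, lam
```

With a >> b, the primal iterate tracks the quasi-static minimizer x ≈ lam/2 while the multiplier slowly settles at the value that makes the constraint active.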
Abstract:
We consider a wireless sensor network whose main function is to detect certain infrequent alarm events, and to forward alarm packets to a base station, using geographical forwarding. The nodes know their locations, and they sleep-wake cycle, waking up periodically but not synchronously. In this situation, when a node has a packet to forward to the sink, there is a trade-off between how long this node waits for a suitable neighbor to wake up and the progress the packet makes towards the sink once it is forwarded to this neighbor. Hence, in choosing a relay node, we consider the problem of minimizing average delay subject to a constraint on the average progress. By constraint relaxation, we formulate this next hop relay selection problem as a Markov decision process (MDP). The exact optimal solution (BF (Best Forward)) can be found, but is computationally intensive. Next, we consider a mathematically simplified model for which the optimal policy (SF (Simplified Forward)) turns out to be a simple one-step-look-ahead rule. Simulations show that SF is very close in performance to BF, even for reasonably small node density. We then study the end-to-end performance of SF in comparison with two extremal policies: Max Forward (MF) and First Forward (FF), and an end-to-end delay minimising policy proposed by Kim et al. [1]. We find that, with appropriate choice of one-hop average progress constraint, SF can be tuned to provide a favorable trade-off between end-to-end packet delay and the number of hops in the forwarding path.
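The one-step-look-ahead flavor of such a relay rule can be sketched with a toy policy. All parameters here are hypothetical illustrations, not the paper's exact SF policy: the node forwards to the first waking neighbor whose progress toward the sink clears an acceptance bar that decays with the time already spent waiting.

```python
import math

def choose_relay(wakeups, p_max=1.0, tau=1.0):
    """Illustrative one-step-look-ahead relay rule (hypothetical
    parameters): accept the first waking neighbor whose progress toward
    the sink exceeds a threshold that decays with elapsed waiting time.
    `wakeups`: list of (wake_time, progress) pairs sorted by wake_time."""
    for t, progress in wakeups:
        bar = p_max * math.exp(-t / tau)  # acceptance bar decays while we wait
        if progress >= bar:
            return t, progress
    # no neighbor cleared the bar within the cycle: take the best seen
    return max(wakeups, key=lambda wp: wp[1])
```

Early wake-ups must offer large progress to be accepted; as waiting grows costlier, the node settles for less, which is exactly the delay-versus-progress trade-off described above.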
Abstract:
Beavers are often found to be in conflict with human interests by creating nuisances like building dams on flowing water (leading to flooding), blocking irrigation canals, cutting down timber, etc. At the same time they contribute to raising water tables, increased vegetation, etc. Consequently, maintaining an optimal beaver population is beneficial. Because of their diffusion externality (due to migratory nature), strategies based on lumped parameter models are often ineffective. Using a distributed parameter model for beaver population that accounts for their spatial and temporal behavior, an optimal control (trapping) strategy is presented in this paper that leads to a desired distribution of the animal density in a region in the long run. The optimal control solution presented embeds the solution for a large number of initial conditions (i.e., it has a feedback form), which is otherwise nontrivial to obtain. The solution obtained can be used in real time by a nonexpert in control theory since it involves only using the neural networks trained offline. Proper orthogonal decomposition-based basis function design followed by their use in a Galerkin projection has been incorporated in the solution process as a model reduction technique. Optimal solutions are obtained through a "single network adaptive critic" (SNAC) neural-network architecture.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations when the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require subsidy from outside the system. We however demonstrate via simulation that, if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
Abstract:
We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance on this setting and converges to a feasible point.
Abstract:
1. The relationship between species richness and ecosystem function, as measured by productivity or biomass, is of long-standing theoretical and practical interest in ecology. This is especially true for forests, which represent a majority of global biomass, productivity and biodiversity. 2. Here, we conduct an analysis of relationships between tree species richness, biomass and productivity in 25 forest plots of area 8-50 ha from across the world. The data were collected using standardized protocols, obviating the need to correct for methodological differences that plague many studies on this topic. 3. We found that at very small spatial grains (0.04 ha) species richness was generally positively related to productivity and biomass within plots, with a doubling of species richness corresponding to an average 48% increase in productivity and 53% increase in biomass. At larger spatial grains (0.25 ha, 1 ha), results were mixed, with negative relationships becoming more common. The results were qualitatively similar but much weaker when we controlled for stem density: at the 0.04 ha spatial grain, a doubling of species richness corresponded to a 5% increase in productivity and 7% increase in biomass. Productivity and biomass were themselves almost always positively related at all spatial grains. 4. Synthesis. This is the first cross-site study of the effect of tree species richness on forest biomass and productivity that systematically varies spatial grain within a controlled methodology. The scale-dependent results are consistent with theoretical models in which sampling effects and niche complementarity dominate at small scales, while environmental gradients drive patterns at large scales. Our study shows that the relationship of tree species richness with biomass and productivity changes qualitatively when moving from scales typical of forest surveys (0.04 ha) to slightly larger scales (0.25 and 1 ha).
This needs to be recognized in forest conservation policy and management.
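The reported "doubling of species richness corresponding to a 48% increase in productivity" corresponds to a log-log (power-law) slope, and the arithmetic can be checked directly. This is an illustrative back-of-the-envelope check, not a calculation from the study itself.

```python
import math

# Reported effect: doubling species richness multiplies productivity
# by 1.48 (a 48% increase). Under a power law P ∝ S^b, this fixes the
# log-log slope at b = log2(1.48).
b = math.log2(1.48)          # richness-productivity elasticity

# Sanity check: doubling S under slope b multiplies P by 1.48 again.
doubling_ratio = 2.0 ** b
```

The same conversion applied to the stem-density-controlled effect (a 5% increase) gives a much shallower slope, log2(1.05) ≈ 0.07, showing how strongly stem density mediates the raw relationship.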
Abstract:
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations using generalized harmonic functions is proposed. Then, the dynamical programming equations and their associated boundary and final time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated based on the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function and the Pontryagin equation for the conditional mean first-passage time of the optimally controlled system is discussed. Finally, the conditional reliability function, the conditional probability density and mean of first-passage time of an optimally controlled system are obtained by solving the backward Kolmogorov equation and Pontryagin equation. The application of the proposed procedure and effectiveness of the control strategy are illustrated with an example.