963 results for Optimal level
Abstract:
The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
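The Ornstein-Uhlenbeck process mentioned above is a standard model for exponentially correlated random disturbances. A minimal numerical sketch using an Euler-Maruyama discretisation (the parameter values and function name are illustrative, not taken from the thesis):

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
    dx = -theta * x dt + sigma dW, a common model for exponentially
    correlated random inputs to a dynamical system."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
path = simulate_ou(theta=1.0, sigma=0.5, x0=0.0, dt=0.01, n_steps=20000, rng=rng)
# the stationary variance of an OU process is sigma**2 / (2 * theta)
```

Such a path would stand in for the random trajectory perturbations when exercising a guidance law in Monte Carlo simulation.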
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
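The "noise filters" described above attenuate each harmonic of the spectrum as a function of the signal-to-noise ratio. A minimal sketch of that idea, assuming a Wiener-type gain and a flat per-bin noise-power estimate (the function name and parameters are illustrative, not the authors' exact routine):

```python
import numpy as np

def snr_noise_filter(record, noise_power_per_bin):
    """Attenuate each harmonic of a digitized record in proportion to an
    estimated signal-to-noise ratio (a Wiener-type gain). Illustrative
    sketch only, not the paper's exact 'noise filter'."""
    spec = np.fft.rfft(record)
    power = np.abs(spec) ** 2
    # signal-power estimate: measured power minus the noise estimate, floored at 0
    signal_est = np.maximum(power - noise_power_per_bin, 0.0)
    gain = signal_est / np.maximum(power, 1e-30)   # gain in [0, 1] at each harmonic
    return np.fft.irfft(spec * gain, n=len(record))

# usage: a sine buried in white noise; filtering should reduce the broadband noise
rng = np.random.default_rng(1)
t = np.arange(2048) * 0.01
clean = np.sin(2 * np.pi * 2.0 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
# for white noise of std sigma, the expected rfft bin power is about sigma**2 * N
filtered = snr_noise_filter(noisy, noise_power_per_bin=0.3 ** 2 * t.size)
```

As the abstract notes, such a gain cannot remove low-frequency noise that overlaps the signal band, which is why integrated time histories may still drift.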
Abstract:
41 p.
Abstract:
36 p.
Abstract:
We demonstrate a new method for extracting high-level scene information from the type of data available from simultaneous localisation and mapping systems. We model the scene with a collection of primitives (such as bounded planes), and make explicit use of both visible and occluded points in order to refine the model. Since our formulation allows for different kinds of primitives and an arbitrary number of each, we use Bayesian model evidence to compare very different models on an even footing. Additionally, by making use of Bayesian techniques we can also avoid explicitly finding the optimal assignment of map landmarks to primitives. The results show that explicit reasoning about occlusion improves model accuracy and yields models which are suitable for aiding data association.
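Bayesian model evidence trades goodness of fit against model complexity, which is what lets models with different numbers of primitives be compared on an even footing. A sketch using the common BIC approximation to the log evidence (an illustrative stand-in, not the paper's method; all names are ours):

```python
import numpy as np

def log_evidence_bic(y, y_pred, n_params):
    """BIC approximation to the log model evidence under Gaussian residuals.
    The likelihood term rewards fit quality; the penalty term charges each
    extra parameter (or scene primitive) 0.5 * log(n)."""
    n = len(y)
    sigma2 = np.sum((y - y_pred) ** 2) / n
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return log_lik - 0.5 * n_params * np.log(n)

# usage: data with a genuine linear trend; the structured (linear) model
# should earn higher evidence than the trivial constant model
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + 0.5 + 0.05 * rng.standard_normal(x.size)
ev_const = log_evidence_bic(y, np.full_like(y, y.mean()), n_params=1)
ev_lin = log_evidence_bic(y, np.polyval(np.polyfit(x, y, 1), x), n_params=2)
```

The same comparison extends to richer models: a model with more primitives only wins if its improved fit outweighs the per-parameter penalty.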
Abstract:
Penaeidins, members of a new family of antimicrobial peptides constitutively produced and stored in the haemocytes of penaeid shrimp, display antimicrobial activity against bacteria and fungi. Here, a DNA sequence encoding the mature Ch-penaeidin peptide was cloned into the pPIC9K vector and transformed into Pichia pastoris. The transformed cells were screened for multi-copy plasmids using increasing concentrations of G418. Positive colonies carrying chromosomal integrations of the Chp gene were identified by phenotype and PCR. When transformed cells were induced with methanol, SDS-PAGE and Western blotting revealed the production of an approximately 6100 Da recombinant CHP (rCHP) expression product. Large-scale expression revealed that rCHP was produced at 108 mg/L under optimal conditions in the highest Chp-producing P. pastoris clone. The antimicrobial activities of rCHP were studied by liquid-phase analysis, which revealed that rCHP exhibited activity against some Gram-negative and Gram-positive bacteria, but relatively low activity against some fungi. Purification of rCHP by cation exchange chromatography and subsequent automated amino acid sequencing revealed four additional amino acids (YVEF) at the N-terminus belonging to the cleaved fusion signal peptide; these residues may account for the observed decrease in antifungal activity. Together, these observations indicate that rCHP is an effective antimicrobial peptide that can be produced at high levels in yeast, and it may therefore be a potential antimicrobial candidate for practical use.
Abstract:
Grazing by domestic herbivores is generally recognized as a major ecological factor and an important evolutionary force in grasslands, with extensive and profound effects on both individual plants and communities. We investigated the response patterns of Polygonum viviparum and the species diversity of an alpine shrub meadow under long-term livestock grazing, using a field manipulative experiment that controlled livestock numbers on the Qinghai-Tibet Plateau in China. We hypothesized that, within a range of grazing pressure, grazing can alter relative allocation to different plant parts without changing total biomass for some plant species if there are life-history trade-offs between plant traits, and that communities of the same type exposed to different grazing pressures may alter relative species abundances or species composition without changing species diversity, because plant species differ in their resistance to herbivory. The results show that plant height and the biomass of different organs differed among grazing treatments, but total biomass remained constant. With increased grazing pressure, both relative and absolute allocation to reproduction and growth decreased while allocation to belowground storage increased, indicating that the increased investment in storage was attained at the cost of reduced bulbil reproduction and represents an optimal allocation and an adaptive response of the species to long-term aboveground damage. Moreover, our results showed multiple response types for both species groups and single species along the gradient of grazing intensity. Heavy grazing caused a 13.2% increase in species richness, and species composition differed by about 18%-20% among grazing treatments, whereas the Shannon-Wiener diversity index (H') and the species evenness index (E) did not differ among treatments. These results support our hypotheses.
Abstract:
It is a neural network truth universally acknowledged that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, whose activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level, and weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur, and only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold rather than a multiplicative weight.
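The subtractive-threshold idea can be given a rough numerical illustration (this sketch and every name in it are our own simplification, not the paper's equations): with a transmission rule s_ij = max(x_i - tau_ij, 0), thresholds can adapt so the total signal at each target node converges toward that node's activity.

```python
import numpy as np

def train_threshold_outstar(x_src, y_tgt, tau, lr=0.1, n_iter=500):
    """Illustrative sketch of learning with a subtractive-threshold
    transmission rule s_ij = max(x_i - tau_ij, 0): thresholds adapt so the
    summed signal at each target node tracks that node's activity."""
    for _ in range(n_iter):
        s = np.maximum(x_src[:, None] - tau, 0.0)   # transmitted signals
        total = s.sum(axis=0)                       # summed signal per target node
        # raise thresholds (weaken paths) where the summed signal exceeds the
        # target's activity; lower them where it falls short
        grad = (total - y_tgt)[None, :] * (s > 0)
        tau = np.clip(tau + lr * grad / max(1, (s > 0).sum()), 0.0, None)
    return tau

x = np.array([1.0, 0.6, 0.2])     # distributed source-field activity
y = np.array([0.8, 0.3])          # target-field activity pattern
tau0 = np.full((3, 2), 0.1)       # initial subtractive thresholds
tau = train_threshold_outstar(x, y, tau0)
s = np.maximum(x[:, None] - tau, 0.0)
```

After training, the column sums of s approximate the target pattern y, mirroring the convergence property stated in the abstract.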
Abstract:
This study explores the experiences of stress and burnout in Irish second-level teachers and examines the contribution of a number of individual, environmental and health factors to burnout development. As no such study has previously been carried out with this sample, a mixed-methods approach was adopted in order to investigate the subject matter comprehensively. Teaching has consistently been identified as a particularly stressful occupation, and research investigating the development of burnout is of great importance in designing measures to address the problem. The first phase of the study involved focus groups with a total of 20 second-level teachers from 11 different schools in the greater Cork city area. Findings suggest that teachers experience a variety of stressors – in class, in the staff room and outside of school. The second phase of the study employed a survey to examine the factors associated with burnout. Analysis of 192 responses suggested that burnout results from a combination of demographic, personality, environmental and coping factors. Burnout was also found to be associated with a number of physical symptoms, particularly trouble sleeping and fatigue. Findings suggest that interventions designed to reduce burnout must reflect the complexity of the problem and its development. Based on the research findings, interventions that combine individual and organisational approaches should provide the optimal chance of effectively tackling burnout.
Abstract:
Many food production methods are both economically and environmentally unsustainable. Our project investigated aquaponics, an alternative method of agriculture that could address these issues. Aquaponics combines fish and plant crop production in a symbiotic, closed-loop system. We aimed to reduce the initial and operating costs of current aquaponic systems by utilizing alternative feeds. These improvements may allow for sustainable implementation of the system in rural or developing regions. We conducted a multi-phase process to determine the most affordable and effective feed alternatives for use in an aquaponic system. At the end of two preliminary phases, soybean meal was identified as the most effective potential feed supplement. In our final phase, we constructed and tested six full-scale aquaponic systems of our own design. Data showed that soybean meal can be used to reduce operating costs and reliance on fishmeal. However, a more targeted investigation is needed to identify the optimal formulation of alternative feed blends.
Abstract:
A computational model of the interrelated phenomena in the vacuum arc remelting process is analyzed and adjusted for optimal accuracy and computation time. The decision steps in this case study are offered as an example of how the coupling in models of similar processes can be addressed. The results show that electromagnetic forces dominate over buoyancy and inertia for the investigated process conditions.
Abstract:
We examine the dynamic optimization problem for not-for-profit financial institutions (NFPs) that maximize consumer surplus, not profits. We characterize the optimal dynamic policy and find that it involves credit rationing. Interest rates set by mature NFPs will typically be more favorable to customers than market rates, as any surplus is distributed in the form of interest rate subsidies, with credit rationing being required to prevent these subsidies from distorting loan volumes from their optimal levels. Rationing overcomes a fundamental problem in NFPs; it allows them to distribute the surplus without distorting the volume of activity from the efficient level.
Abstract:
Institutional and economic development has recently returned to the forefront of economic analysis. The use of case studies (both historical and contemporary) has been important in this revival. Likewise, it has been argued recently by economic methodologists that historical context provides a kind of "laboratory" for the researcher interested in real-world economic phenomena. Counterterrorism economics, in contrast with much of the rest of the literature on terrorism, has all too rarely drawn upon detailed contextual case studies. This article seeks to help remedy this problem. Archival evidence, including previously unpublished material on the DeLorean case, is an important feature of this article. The article examines how an interrelated strategy, which traded off economic, security, and political considerations, operated during the Troubles. Economic repercussions of this strategy are discussed. An economic analysis of technical and organizational change within paramilitarism is also presented. A number of institutional lessons are discussed, including the optimal balance between carrot and stick, centralization versus decentralization, the economics of intelligence operations, and tit-for-tat violence. While existing economic models are arguably correct in identifying benefits from politico-economic decentralization, they downplay the element highlighted by institutional analysis.
Abstract:
A novel approach for the multi-objective design optimisation of aerofoil profiles is presented. The proposed method aims to exploit the relative strengths of global and local optimisation algorithms, whilst using surrogate models to limit the number of computationally expensive CFD simulations required. The local search stage utilises a re-parameterisation scheme that increases the flexibility of the geometry description by iteratively increasing the number of design variables, enabling superior designs to be generated with minimal user intervention. Capability of the algorithm is demonstrated via the conceptual design of aerofoil sections for use on a lightweight laminar flow business jet. The design case is formulated to account for take-off performance while reducing sensitivity to leading edge contamination. The algorithm successfully manipulates boundary layer transition location to provide a potential set of aerofoils that represent the trade-offs between drag at cruise and climb conditions in the presence of a challenging constraint set. Variations in the underlying flow physics between Pareto-optimal aerofoils are examined to aid understanding of the mechanisms that drive the trade-offs in objective functions.
Abstract:
BACKGROUND: The optimal ways of using aromatase inhibitors or tamoxifen as endocrine treatment for early breast cancer remain uncertain.
METHODS: We undertook meta-analyses of individual data on 31 920 postmenopausal women with oestrogen-receptor-positive early breast cancer in the randomised trials of 5 years of aromatase inhibitor versus 5 years of tamoxifen; of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5; and of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen. Primary outcomes were any recurrence of breast cancer, breast cancer mortality, death without recurrence, and all-cause mortality. Intention-to-treat log-rank analyses, stratified by age, nodal status, and trial, yielded aromatase inhibitor versus tamoxifen first-event rate ratios (RRs).
FINDINGS: In the comparison of 5 years of aromatase inhibitor versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·64, 95% CI 0·52-0·78) and 2-4 (RR 0·80, 0·68-0·93), and non-significantly thereafter. 10-year breast cancer mortality was lower with aromatase inhibitors than tamoxifen (12·1% vs 14·2%; RR 0·85, 0·75-0·96; 2p=0·009). In the comparison of 5 years of aromatase inhibitor versus 2-3 years of tamoxifen then aromatase inhibitor to year 5, recurrence RRs favoured aromatase inhibitors significantly during years 0-1 (RR 0·74, 0·62-0·89) but not while both groups received aromatase inhibitors during years 2-4, or thereafter; overall in these trials, there were fewer recurrences with 5 years of aromatase inhibitors than with tamoxifen then aromatase inhibitors (RR 0·90, 0·81-0·99; 2p=0·045), though the breast cancer mortality reduction was not significant (RR 0·89, 0·78-1·03; 2p=0·11). In the comparison of 2-3 years of tamoxifen then aromatase inhibitor to year 5 versus 5 years of tamoxifen, recurrence RRs favoured aromatase inhibitors significantly during years 2-4 (RR 0·56, 0·46-0·67) but not subsequently, and 10-year breast cancer mortality was lower with switching to aromatase inhibitors than with remaining on tamoxifen (8·7% vs 10·1%; 2p=0·015). Aggregating all three types of comparison, recurrence RRs favoured aromatase inhibitors during periods when treatments differed (RR 0·70, 0·64-0·77), but not significantly thereafter (RR 0·93, 0·86-1·01; 2p=0·08). Breast cancer mortality was reduced both while treatments differed (RR 0·79, 0·67-0·92), and subsequently (RR 0·89, 0·81-0·99), and for all periods combined (RR 0·86, 0·80-0·94; 2p=0·0005). All-cause mortality was also reduced (RR 0·88, 0·82-0·94; 2p=0·0003). RRs differed little by age, body-mass index, stage, grade, progesterone receptor status, or HER2 status. 
There were fewer endometrial cancers with aromatase inhibitors than tamoxifen (10-year incidence 0·4% vs 1·2%; RR 0·33, 0·21-0·51) but more bone fractures (5-year risk 8·2% vs 5·5%; RR 1·42, 1·28-1·57); non-breast-cancer mortality was similar.
INTERPRETATION: Aromatase inhibitors reduce recurrence rates by about 30% (proportionately) compared with tamoxifen while treatments differ, but not thereafter. 5 years of an aromatase inhibitor reduces 10-year breast cancer mortality rates by about 15% compared with 5 years of tamoxifen, hence by about 40% (proportionately) compared with no endocrine treatment.
FUNDING: Cancer Research UK, Medical Research Council.