Abstract:
Lifecycle funds offered by retirement plan providers allocate aggressively to risky asset classes when the employee participants are young, gradually switching to more conservative asset classes as they grow older and approach retirement. This approach focuses on maximizing growth of the accumulation fund in the initial years and preserving its value in the later years. The authors simulate terminal wealth outcomes based on conventional lifecycle asset allocation rules as well as on contrarian strategies that reverse the direction of asset switching. The evidence suggests that the growth in portfolio size over time significantly impacts the asset allocation decision. Because of this portfolio size effect, the terminal value of accumulation in retirement accounts is influenced more by the asset allocation strategy adopted in the later years than by that adopted in the early years. By mechanistically switching to conservative assets in the later years of a plan, lifecycle strategies sacrifice significant growth opportunity and prove counterproductive to the participant's wealth accumulation objective. The authors conclude that this sacrifice is not adequately compensated by a reduction in the risk of potentially adverse outcomes.
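The portfolio size effect can be illustrated with a small simulation. The sketch below is not the authors' model; it is a minimal Monte Carlo comparison of a conventional glide path and its contrarian reversal, with the return parameters, contribution schedule and glide-path shape all assumed purely for illustration.

```python
# Toy Monte Carlo sketch (not the authors' model): compares terminal wealth under a
# conventional lifecycle glide path (equity weight falling with age) and a contrarian
# glide path that reverses the switch. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
years, n_sims, contribution = 40, 10_000, 1.0             # one unit contributed per year
mu_eq, sd_eq, mu_bond, sd_bond = 0.08, 0.17, 0.03, 0.05   # assumed asset parameters

conventional = np.linspace(1.0, 0.2, years)   # equity weight: 100% -> 20%
contrarian = conventional[::-1]               # reversed switching direction

def terminal_wealth(equity_weights):
    wealth = np.zeros(n_sims)
    for w in equity_weights:
        eq = rng.normal(mu_eq, sd_eq, n_sims)
        bd = rng.normal(mu_bond, sd_bond, n_sims)
        wealth = (wealth + contribution) * (1 + w * eq + (1 - w) * bd)
    return wealth

for name, glide_path in [("conventional", conventional), ("contrarian", contrarian)]:
    tw = terminal_wealth(glide_path)
    print(f"{name:12s} median={np.median(tw):7.1f}  5th pct={np.percentile(tw, 5):7.1f}")
```

Because the account balance is largest in the final years, the weights applied in those years dominate the simulated terminal wealth, which is the portfolio size effect the abstract describes.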
Abstract:
The aim of the study is to identify the opportunities and challenges a local government public asset manager is most likely to deal with when adopting an appropriate public asset management framework, especially in developing countries. To achieve this aim, the study employs a case study in Indonesia, collecting data through interviews, document analysis and observations in South Sulawesi Province, Indonesia. The study concludes that there are significant opportunities and challenges that local governments in developing countries, especially Indonesia, may need to manage if they are to apply a public asset management framework appropriately. The opportunities are a more effective and efficient local government, an accountable and auditable local government organization, an increased local government portfolio, up-to-date information for decision makers in local government, and improved quality of public services. On the other hand, there are also challenges. Those challenges are the lack of a clear legal and institutional framework to support the application of asset management, the non-profit principle of public assets, cross-jurisdictional issues in public asset management, the complexity of public organization objectives, and the availability of the data required for managing public property. The study only covers the conditions of developing countries, with Indonesia as an example, which cannot exactly represent the condition of all local governments in the world. Further study to develop an asset management system applicable to all local governments in developing countries is urgently needed. Findings from this study will provide useful input for policy makers, scholars and asset management practitioners to develop an asset management framework for more efficient and effective local governments.
Abstract:
The purpose of this paper is to emphasize the significance of public asset management in Indonesia by identifying the opportunities and challenges of Indonesian local governments in adopting current public asset management practice. A case study of the South Sulawesi provincial government was used as the approach to achieve the research objective. The case study involved two data collection techniques, i.e. interviews followed by document analysis. The results of the study indicate that there are significant opportunities and challenges that Indonesian local governments might deal with in adopting current public asset management practice. The opportunities can lead to a more effective and efficient local government, an accountable and auditable local government organization, an increased local government portfolio, and improved quality of public services. The challenges include the absence of a clear institutional and legal framework to support the application of asset management, the non-profit principle of public assets, cross-jurisdictional issues in public asset management, the complexity of local government objectives, and the unavailability of data for managing public property. The study only covers the condition of South Sulawesi Province, which cannot exactly represent the condition of all local governments in Indonesia. Findings from this study provide useful input for policy makers, scholars and asset management practitioners in Indonesia to establish a public asset management framework that is suitable for Indonesia.
Abstract:
Asset management in local government is an emerging discipline that over the past decade has become a crucial aspect of building a more efficient and effective organisation. One crucial feature of public asset management is performance measurement of public real estate. This measurement looks critically at an important component of public wealth and seeks to apply a standard of economic efficiency and effective organisational management, especially under global financial crisis conditions. This paper aims to identify the effects of the global economic crisis and proposes an alternative solution for local governments to soften the impact of the crisis on their organisations. The study found that the most suitable solution for Indonesian local governments confronting the global economic crisis is the application of performance measurement in their asset management. Thus, it is important to develop a performance measurement system in the local government asset management process. The study draws its suggestions from published documents and the literature. The paper also discusses the elements of public real estate performance measurement. The measurement of performance has become an essential component of the strategic thinking of asset owners and managers. Without a formal measurement system for performance, it is difficult to plan, control and improve a local government real estate management system. A close look at best practices in the public sector reveals that in most cases these practices were transferred from private sector real estate management under the direction of real estate experts retained by government. One of the most significant advances in government property performance measurement resulted from the recognition that the methodology used by private sector, non-real-estate corporations for managing their real property offered a valuable prototype for local governments. In general, two approaches are most frequently used to measure the performance of public organisations: subjective and objective measures. Finally, findings from this study provide useful input for local government policy makers, scholars and asset management practitioners to establish a public real estate performance measurement system, leading to more efficient and effective local governments in managing their assets as well as increasing public service quality in order to soften the impact of the global financial crisis.
Abstract:
RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.
Abstract:
In this thesis, the issue of incorporating uncertainty for environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding given the increased data size with models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data is informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery. The models are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change in two successive images is proposed that is not affected by registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
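As a minimal illustration of the registration-free change detection idea, the sketch below segments the first image into homogeneous components with a mixture model (a simplified stand-in for the Bayesian mixture model used in the thesis) and applies a two-sample Kolmogorov-Smirnov test per segment. The synthetic images, the two-component mixture, and the implied significance threshold are assumptions for illustration only.

```python
# Sketch of segment-based change detection: segment image1 into homogeneous components
# by pixel value, then compare the pixel-value distributions of the two images within
# each segment using a two-sample Kolmogorov-Smirnov test. No co-registration is used.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
image1 = rng.normal(loc=rng.choice([0.2, 0.8], size=(64, 64)), scale=0.05)
image2 = image1 + rng.normal(0, 0.05, size=image1.shape)
image2[40:, 40:] += 0.3                      # simulated change in one region

# Segment image1 into homogeneous components based on pixel values
# (a simplified stand-in for the Bayesian mixture model in the thesis).
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
    image1.reshape(-1, 1)).reshape(image1.shape)

# KS test per segment: a small p-value flags a change in that segment's distribution.
for k in np.unique(labels):
    mask = labels == k
    result = ks_2samp(image1[mask], image2[mask])
    print(f"segment {k}: KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3g}")
```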
Abstract:
Over recent years, Unmanned Air Vehicles (UAVs) have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad range of sizes and capabilities and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust Evolutionary Algorithm optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise, and also to problems where multiple solutions are sought via game theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses and in producing a set of non-dominated solutions along an optimal Pareto front for the designer.
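The Pareto-based selection idea can be sketched in a few lines. The following toy example is not HAPEA or the paper's framework: it evolves a population on an assumed two-objective test problem and keeps the non-dominated set, illustrating the kind of trade-off front such optimisers produce.

```python
# Toy multi-objective evolutionary sketch: mutate a population, pool parents and
# children, and select from the non-dominated (Pareto) set. Problem and parameters
# are illustrative assumptions, not the paper's design problem.
import numpy as np

rng = np.random.default_rng(2)

def objectives(x):
    # Simple convex two-objective test problem on [0, 1]^2 (both minimised).
    return np.array([x[0], 1 + x[1] - np.sqrt(x[0])])

def non_dominated(F):
    # Indices of points not dominated by any other point (minimisation).
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

pop = rng.random((40, 2))
for gen in range(50):
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
    combined = np.vstack([pop, children])
    F = np.array([objectives(x) for x in combined])
    elite = non_dominated(F)
    pop = combined[rng.choice(elite, size=40)]   # refill from the elite set

F = np.array([objectives(x) for x in pop])
front = F[non_dominated(F)]
print("approximate Pareto front (f1, f2):")
print(np.round(front[np.argsort(front[:, 0])], 3))
```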
Abstract:
Large deformation analysis is one of the major challenges in the numerical modelling and simulation of metal forming. Because no mesh is used, meshfree methods show good potential for large deformation analysis. In this paper, a local meshfree formulation, based on local weak forms and the updated Lagrangian (UL) approach, is developed for large deformation analysis. To fully exploit the advantages of meshfree methods, a simple and effective adaptive technique is proposed; this procedure is much simpler than re-meshing in FEM. Numerical examples of large deformation analysis are presented to demonstrate the effectiveness of the newly developed nonlinear meshfree approach. It has been found that the developed meshfree technique provides superior performance to conventional FEM in dealing with large deformation problems in metal forming.
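Meshfree local weak-form methods build their approximation from scattered nodes rather than elements. The sketch below shows a generic one-dimensional moving least squares (MLS) shape-function construction with a cubic spline weight, a common ingredient of such formulations; it is not the authors' UL formulation, and the node layout and support size are illustrative assumptions.

```python
# Minimal 1D moving least squares (MLS) sketch: the shape-function construction that
# underlies many meshfree local weak-form methods. Linear basis, cubic spline weight.
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)        # scattered nodes (uniform here for simplicity)
support = 0.3                            # radius of each node's domain of influence

def weight(r):
    # Cubic spline weight of normalised distance r = |x - x_i| / support.
    r = np.abs(r)
    w = np.zeros_like(r)
    near = r <= 0.5
    far = (r > 0.5) & (r <= 1.0)
    w[near] = 2/3 - 4*r[near]**2 + 4*r[near]**3
    w[far] = 4/3 - 4*r[far] + 4*r[far]**2 - (4/3)*r[far]**3
    return w

def mls_shape_functions(x):
    # phi_i(x) for a linear basis p = [1, x]; returns one value per node.
    p_x = np.array([1.0, x])
    P = np.vstack([np.ones_like(nodes), nodes]).T          # (n_nodes, 2)
    w = weight((x - nodes) / support)                      # (n_nodes,)
    A = (P * w[:, None]).T @ P                             # moment matrix, (2, 2)
    B = (P * w[:, None]).T                                 # (2, n_nodes)
    return p_x @ np.linalg.solve(A, B)

# Consistency check: MLS with a linear basis reproduces a linear field exactly.
u_nodes = 2.0 + 3.0 * nodes
x_eval = 0.37
print("MLS value:", mls_shape_functions(x_eval) @ u_nodes, " exact:", 2.0 + 3.0 * x_eval)
```

In a local weak-form method these shape functions replace element interpolation, which is why adaptivity reduces to adding or moving nodes rather than re-meshing.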
Abstract:
Obese children move less, and with greater difficulty, than normal-weight counterparts, yet expend comparable energy. Increased metabolic costs have been attributed to poor biomechanics, but few studies have investigated the influence of obesity on the mechanical demands of gait. This study sought to assess three-dimensional lower extremity joint powers at two walking cadences in 28 obese and normal-weight children. 3D motion analysis was conducted for five trials of barefoot walking at self-selected and 30% greater than self-selected cadences. Mechanical power was calculated at the hip, knee, and ankle in the sagittal, frontal and transverse planes. Significant group differences were seen for all power phases in the sagittal plane, for hip and knee power at weight acceptance and hip power at propulsion in the frontal plane, and for knee power during mid-stance in the transverse plane. After adjusting for body weight, group differences existed in hip and knee power phases at weight acceptance in the sagittal and frontal planes, respectively. Differences between cadences existed for all hip joint powers in the sagittal plane and for frontal-plane hip power at propulsion. Frontal-plane knee power at weight acceptance and sagittal-plane knee power at propulsion also differed significantly between cadences. Larger joint powers in obese children contribute to the difficulty of performing locomotor tasks, potentially decreasing motivation to exercise.
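For reference, joint powers of this kind are conventionally obtained from inverse dynamics as the product of the net joint moment and the joint angular velocity, with per-plane powers taken from the corresponding components. The abstract does not give the study's exact processing details, so the expression below is the standard definition rather than a description of their pipeline.

```latex
P_j(t) = \mathbf{M}_j(t) \cdot \boldsymbol{\omega}_j(t)
```

Here \mathbf{M}_j(t) is the net moment at joint j and \boldsymbol{\omega}_j(t) is the angular velocity of the distal segment relative to the proximal segment.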
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
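As a small illustration of the subband-modelling step described above, the sketch below decomposes a toy image with a wavelet transform and fits a generalized Gaussian shape parameter to each detail subband. The thesis estimates the shape parameter via a least squares formulation; for brevity this sketch uses the simpler moment-ratio estimator, and the test image and wavelet choice ('db4') are assumptions for illustration only.

```python
# Model wavelet detail subbands with a generalized Gaussian distribution (GGD):
# estimate the shape parameter per subband from the moment ratio E|X| / sqrt(E[X^2]).
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape_from_moments(band):
    # The ratio E|X| / sqrt(E[X^2]) uniquely determines the GGD shape parameter beta.
    c = band.ravel() - band.mean()
    r = min(np.mean(np.abs(c)) / np.sqrt(np.mean(c**2)), 0.85)  # keep root inside bracket
    f = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(f, 0.05, 10.0)

rng = np.random.default_rng(3)
image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth toy "image"

# Two-level wavelet decomposition; fit a GGD shape parameter to each detail subband.
coeffs = pywt.wavedec2(image, 'db4', level=2)
for lvl, bands in zip((2, 1), coeffs[1:]):      # wavedec2 lists coarsest details first
    for name, band in zip(('horizontal', 'vertical', 'diagonal'), bands):
        print(f"level {lvl} {name}: estimated GGD shape ~ {ggd_shape_from_moments(band):.2f}")
```

A bit allocation or quantizer design step would then use these per-subband shape and scale estimates, which is the role the GGD model plays in the thesis.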