971 results for AGE ESTIMATION
Abstract:
This paper presents the site classification of the Bangalore Mahanagar Palike (BMP) area using geophysical data and the evaluation of spectral acceleration at ground level using a probabilistic approach. Site classification has been carried out using experimental data from the shallow geophysical method of Multichannel Analysis of Surface Waves (MASW). One-dimensional (1-D) MASW surveys have been carried out at 58 locations and the respective velocity profiles obtained. The average shear wave velocity for 30 m depth (Vs(30)) has been calculated and used for the site classification of the BMP area as per NEHRP (National Earthquake Hazards Reduction Program). Based on the Vs(30) values, the major part of the BMP area can be classified as "site class D" and "site class C". A smaller portion of the study area, in and around Lalbagh Park, is classified as "site class B". Further, probabilistic seismic hazard analysis has been carried out to map the seismic hazard in terms of spectral acceleration (S-a) at rock and at ground level, considering the site classes and the six seismogenic sources identified. The mean annual rate of exceedance and the cumulative probability hazard curve for S-a have been generated. The quantified hazard values in terms of spectral acceleration for short and long periods are mapped for rock and for site classes C and D with 10% probability of exceedance in 50 years on a grid size of 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at surface level has been developed for 5% damping and 10% probability of exceedance in 50 years for rock and for site classes C and D. These spectral accelerations and uniform hazard spectra can be used to assess the design force for important structures and to develop the design spectrum.
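The Vs(30) statistic used for the NEHRP classification above is the time-averaged shear wave velocity over the top 30 m of the profile. A minimal sketch (the layer profile in the example is hypothetical; the class boundaries shown are the standard NEHRP values):

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear wave velocity over the top 30 m.

    Vs30 = 30 / sum(h_i / v_i), with layers truncated at 30 m depth.
    """
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        h = min(h, 30.0 - depth)      # truncate the profile at 30 m
        travel_time += h / v
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(vs30_value):
    """Map Vs30 (m/s) to a NEHRP site class (classes B through E shown)."""
    if vs30_value > 760.0:
        return "B"
    if vs30_value > 360.0:
        return "C"
    if vs30_value > 180.0:
        return "D"
    return "E"
```

For example, a profile of three 10 m layers at 200, 300 and 400 m/s gives Vs30 of about 277 m/s, i.e. site class D.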
Abstract:
In this paper, we present an approach to estimating the fractal complexity of discrete-time signal waveforms based on computing the area bounded by the sample points of the signal at different time resolutions. The slope of the best straight-line fit to the graph of log(A(r_k)/r_k^2) versus log(1/r_k) is estimated, where A(r_k) is the area computed at time resolution r_k. The slope quantifies the complexity of the signal and is taken as an estimate of the fractal dimension (FD). The proposed approach is used to estimate the fractal dimension of parametric fractal signals with known fractal dimensions, and the method gives accurate results. The estimation accuracy of the method is compared with that of Higuchi's and Sevcik's methods. The proposed method gives more accurate results than Sevcik's method, and its results are comparable to those of Higuchi's method. The practical application of the complexity measure in detecting changes in signal complexity is discussed using real sleep electroencephalogram recordings from eight different subjects. The FD-based approach shows good performance in discriminating different stages of sleep.
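The slope-based FD estimate described above reduces to a least-squares fit on a log-log graph. A minimal sketch follows; the area measure used here (trapezoidal strips over every k-th sample) is only an illustrative stand-in for the paper's construction:

```python
import math

def area_at_resolution(signal, k):
    """Illustrative 'area' at time resolution k: trapezoidal strips
    formed by every k-th sample (a stand-in for the paper's measure)."""
    sub = signal[::k]
    return sum(abs(a) + abs(b) for a, b in zip(sub, sub[1:])) * k / 2.0

def fd_slope(signal, resolutions):
    """Least-squares slope of log(A(r)/r^2) versus log(1/r),
    taken as the fractal dimension estimate."""
    xs = [math.log(1.0 / r) for r in resolutions]
    ys = [math.log(area_at_resolution(signal, r) / r ** 2) for r in resolutions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```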
Abstract:
Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, and the ZC approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for IF estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
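As a simple illustration of the ZC idea discussed above, the spacing of successive zero crossings of a narrow-band signal directly yields a local frequency estimate; this is a generic sketch, not the paper's multi-resolution estimators:

```python
import math

def zc_instantaneous_frequency(signal, fs):
    """Instantaneous frequency (IF) from zero-crossing (ZC) spacing.

    Adjacent zero crossings of a narrow-band signal are half a period
    apart, so the local frequency is fs / (2 * spacing_in_samples).
    """
    zc = [i for i in range(1, len(signal))
          if (signal[i - 1] < 0) != (signal[i] < 0)]
    return [fs / (2.0 * (b - a)) for a, b in zip(zc, zc[1:])]
```

For a clean 50 Hz sinusoid sampled at 1 kHz, every estimate comes out at 50 Hz; under noise, spurious crossings appear, which is exactly the robustness issue the paper examines.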
Abstract:
The issue of dynamic spectrum scene analysis in any cognitive radio network becomes extremely complex when low probability of intercept, spread spectrum systems are present in the environment. Detection and estimation become more complex still if the frequency hopping spread spectrum system is adaptive in nature. In this paper, we propose a two-phase approach for the detection and estimation of frequency hopping signals. A polyphase filter bank is proposed as the architecture of choice for the detection phase, to efficiently detect the presence of a frequency hopping signal. Based on the modeling of the frequency hopping signal, it can be shown that parametric methods of line spectral analysis are well suited for the estimation of frequency hopping signals if the issues of order estimation and time localization are resolved. An algorithm using line spectra parameter estimation and wavelet-based transient detection is proposed which resolves the above issues in a computationally efficient manner suitable for implementation in a cognitive radio. Simulations show promising results, proving that adaptive frequency hopping signals can be detected and demodulated in a non-cooperative context, even at very low signal-to-noise ratios, in real time.
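As a rough illustration of the detection-then-estimation idea, a dwell-by-dwell spectral peak picker can recover a hop sequence in the noise-free case. This naive DFT stand-in is not the paper's polyphase filter bank or line-spectra method, and the dwell length is assumed known:

```python
import math
import cmath

def dominant_freq(block, fs):
    """Naive DFT peak picking: return the frequency of the bin
    with maximum magnitude (positive frequencies only)."""
    n = len(block)
    mags = [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(block)))
            for k in range(n // 2)]
    return mags.index(max(mags)) * fs / n

def hop_sequence(signal, fs, dwell):
    """Estimate the carrier frequency in each dwell-length segment."""
    return [dominant_freq(signal[i:i + dwell], fs)
            for i in range(0, len(signal) - dwell + 1, dwell)]
```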
Abstract:
We consider the problem of estimating the optimal parameter trajectory over a finite time interval in a parameterized stochastic differential equation (SDE), and propose a simulation-based algorithm for this purpose. Towards this end, we consider a discretization of the SDE over finite time instants and reformulate the problem as one of finding an optimal parameter at each of these instants. A stochastic approximation algorithm based on the smoothed functional technique is adapted to this setting for finding the optimal parameter trajectory. A proof of convergence of the algorithm is presented, and results of numerical experiments over two different settings are shown. The algorithm is seen to exhibit good performance. We also present extensions of our framework to the case of finding optimal parameterized feedback policies for controlled SDEs, and present numerical results in this scenario as well.
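The smoothed functional technique mentioned above can be sketched for a scalar parameter: a Gaussian perturbation smooths the cost, and a one-sided simulation-based estimate replaces the exact gradient in a descent step. The step size, smoothing parameter, and quadratic test cost below are illustrative, not the paper's settings:

```python
import random

def sf_gradient_step(theta, cost, beta, step, rng):
    """One smoothed functional (SF) update for a scalar parameter.

    A standard Gaussian perturbation eta smooths the cost; the one-sided
    estimate (eta / beta) * cost(theta + beta * eta) is, in expectation,
    close to the gradient of the smoothed cost, so a Robbins-Monro style
    descent step can use it in place of an exact gradient.
    """
    eta = rng.gauss(0.0, 1.0)
    grad_est = (eta / beta) * cost(theta + beta * eta)
    return theta - step * grad_est
```

Iterating this step on a noisy cost drives theta toward a minimizer; in the paper's setting one such parameter is maintained per discretized time instant of the SDE.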
Abstract:
The finite element method (FEM) is used to determine, for pitch-point, mid-point and tip loading, the deflection curve of a standard spur gear tooth of Image 1 diametral pitch (DP) for 14, 21, 26 and 34 teeth. In all these cases, the deflection of the gear tooth at the point of loading obtained by FEM is in good agreement with the experimental value. The contraflexure in the deflection curve at the point of loading, observed experimentally in the cases of pitch-point and mid-point loading, is predicted correctly by the FEM analysis.
Abstract:
Theoretical approaches are of fundamental importance for predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with an available gradient-based technique. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm that is based on the social behavior of animals searching for food sources. The finite difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated, and mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison with the proposed solver. A detailed comparative study is carried out between the conventional methods and the proposed solver. The present automated technique is found to be very efficient and robust, and the mass transport parameters are obtained with great precision.
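A minimal global-best PSO, of the kind such a gradient-free solver builds on, can be sketched as follows. In the paper the objective would be the misfit between the finite difference forward model and the through-diffusion data; here a simple quadratic stands in, and the inertia and acceleration constants are common textbook values:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer.

    Velocity update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    where pbest is each particle's best position and gbest the swarm's.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pcost = [objective(x) for x in xs]
    gcost = min(pcost)
    gbest = list(pbest[pcost.index(gcost)])
    w, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            c = objective(xs[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, list(xs[i])
                if c < gcost:
                    gcost, gbest = c, list(xs[i])
    return gbest, gcost
```

No gradients of the objective are required, which is what makes the approach attractive when the forward model is a numerical solver.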
Abstract:
The average population age has been increasing for decades. In the U.S., the history of retirement communities in some states is relatively long, reaching back to the 1920s. In Finland, which has one of the fastest-growing elderly populations and highest total dependency ratios, seniors housing is a relatively new market within the residential housing business. Some studies have reported that only a small percentage of seniors are willing to move into age-restricted communities in Finland. This study analyzes the awareness and attitudes of Finnish people toward age-restricted housing for seniors and toward seniors living in these communities. The results show that the majority of Finns were undecided as to whether “senior houses” were the same as assisted living facilities. The respondents associated age-restricted communities with institutional housing for lonely elderly people with illnesses. The results of this study will help investors and developers understand how potential customers see age-restricted housing for seniors. Managers of senior houses can also use the results to clarify the idea of senior houses.
Abstract:
Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A and a complex structure for the genotype-by-environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, which can lead to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
Abstract:
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle, to assist beef producers in meeting market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100 days and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to assist the beef industry in meeting market specifications.
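Logistic growth equations of the kind referenced above have the generic form dX/dt = r·X·(1 − X/K), which can be integrated by forward Euler as a sketch; the rate and capacity values in the example are hypothetical, not the model's fitted coefficients:

```python
def logistic_growth(x0, rate, capacity, days, dt=0.1):
    """Forward-Euler integration of the logistic equation
    dX/dt = rate * X * (1 - X / capacity) over `days`."""
    x = x0
    for _ in range(int(days / dt)):
        x += dt * rate * x * (1.0 - x / capacity)
    return x
```

Sensitivity analysis in this setting amounts to perturbing `rate` and `capacity` and observing the change in the trajectory, which is how influential parameter coefficients are identified.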
Abstract:
Objective: This study explored the dimensionality and measurement invariance of the 25-item Connor-Davidson Resilience Scale (CD-RISC; Connor & Davidson, 2003) across samples of adult (n = 321; aged 20–36) and adolescent (n = 199; aged 12–18) Australian cricketers. Design: Cross-sectional, self-report survey. Methods: An online, multi-section questionnaire. Results: Confirmatory factor and item-level analyses supported the psychometric superiority of a revised 10-item, unidimensional model of resilience over the original 25-item, five-factor measurement model. Positive and moderate correlations with hardiness, as well as negative and moderate correlations with burnout components, were evidenced, thereby providing support for the convergent validity of the unidimensional model. Measurement invariance analyses of the unidimensional model across the two age-group samples supported configural (i.e., same factor structure across groups), metric (i.e., same pattern of factor loadings across groups), and partial scalar invariance (i.e., mostly the same intercepts across groups). Conclusion: Evidence for a psychometrically sound measure of resilient qualities of the individual provides an important foundation upon which researchers can identify the antecedents to and outcomes of resilience in sport contexts.
Abstract:
This paper investigates the effect that text pre-processing approaches have on the estimation of the readability of web pages. Readability has been highlighted as an important aspect of web search result personalisation in previous work. The most widely used text readability measures rely on surface-level characteristics of text, such as the length of words and sentences. We demonstrate that different tools for extracting text from web pages lead to very different estimations of readability. This has an important implication for search engines, because search result personalisation strategies that consider users' reading ability may fail if incorrect text readability estimations are computed.
Abstract:
Historically, determining the country of origin of a published work presented few challenges, because works were generally published physically – whether in print or otherwise – in a distinct location or a few locations. However, publishing opportunities presented by new technologies mean that we now live in a world of simultaneous publication – works that are first published online are published simultaneously to every country in the world in which there is Internet connectivity. While this is certainly advantageous for the dissemination and impact of information and creative works, it creates potential complications under the Berne Convention for the Protection of Literary and Artistic Works (“Berne Convention”), an international intellectual property agreement to which most countries in the world now subscribe. Under the Berne Convention’s national treatment provisions, rights accorded to foreign copyright works may not be subject to any formality, such as registration requirements (although member countries are free to impose formalities in relation to domestic copyright works). In Kernel Records Oy v. Timothy Mosley p/k/a Timbaland, et al., however, the Florida Southern District Court of the United States ruled that first publication of a work on the Internet via an Australian website constituted “simultaneous publication all over the world,” and therefore rendered the work a “United States work” under the definition in section 101 of the U.S. Copyright Act, subjecting the work to the registration formality under section 411. This ruling is in sharp contrast with an earlier decision delivered by the Delaware District Court in Håkan Moberg v. 33T LLC, et al., which arrived at the opposite conclusion. The conflicting rulings of the U.S. courts reveal the problems posed by new forms of publishing online and demonstrate a compelling need for further harmonization between the Berne Convention, domestic laws and the practical realities of digital publishing.
In this chapter, we argue that even if a work first published online can be considered to be simultaneously published all over the world it does not follow that any country can assert itself as the “country of origin” of the work for the purpose of imposing domestic copyright formalities. More specifically, we argue that the meaning of “United States work” under the U.S. Copyright Act should be interpreted in line with the presumption against extraterritorial application of domestic law to limit its application to only those works with a real and substantial connection to the United States. There are gaps in the Berne Convention’s articulation of “country of origin” which provide scope for judicial interpretation, at a national level, of the most pragmatic way forward in reconciling the goals of the Berne Convention with the practical requirements of domestic law. We believe that the uncertainties arising under the Berne Convention created by new forms of online publishing can be resolved at a national level by the sensible application of principles of statutory interpretation by the courts. While at the international level we may need a clearer consensus on what amounts to “simultaneous publication” in the digital age, state practice may mean that we do not yet need to explore textual changes to the Berne Convention.
Abstract:
Grain feeding low bodyweight, cast-for-age (CFA) sheep from pastoral areas of eastern Australia at the end of the growing season can enable critical carcass weight grades to be achieved and thus yield better economic returns. The aim of this work was to compare growth and carcass characteristics for CFA Merino ewes consuming either simple diets based on whole sorghum grain or commercial feed pellets. The experiment also compared various sources of additional nitrogen (N) for inclusion in sorghum diets and evaluated several introductory regimes. Seventeen ewes were killed initially to provide baseline carcass data and the remaining 301 ewes were gradually introduced to the concentrate diets over 14 days before being fed concentrates and wheaten hay ad libitum for 33 or 68 days. Concentrate treatments were: (i) commercial feed pellets, (ii) sorghum mix (SM; whole sorghum grain, limestone, salt and molasses) + urea and ammonium sulfate (SMU), (iii) SMU + whole cottonseed at 286 g/kg of concentrate dry matter (DM), (iv) SM + cottonseed meal at 139 g/kg of concentrate DM, (v) SMU + virginiamycin (20 mg/kg of concentrate) for the first 21 days of feeding, and (vi) whole cottonseed gradually replaced by SMU over the first 14 days of feeding. The target carcass weight of 18 kg was achieved after only 33 days on feed for the pellets and the SM + cottonseed meal diet. All other whole grain sorghum diets required between 33 and 68 days on feed to achieve the target carcass weight. Concentrates based on whole sorghum grain generally produced significantly (P < 0.05) lower carcass weight and fat score than pellets and this may have been linked to the significantly (P < 0.05) higher faecal starch concentrations for ewes consuming sorghum-based diets (270 v. 72 g/kg DM on day 51 of feeding for sorghum-based diets and pellets, respectively). 
Source of N in whole grain sorghum rations and special introductory regimes had no significant (P > 0.05) effects on carcass weight or fat score of ewes with the exception of carcass weight for SMU + whole cottonseed being significantly lower than SM + cottonseed meal at day 33. Ewes finished on all diets produced acceptable carcasses although muscle pH was high in all ewe carcasses (average 5.8 and 5.7 at 33 and 68 days, respectively). There were no significant (P > 0.05) differences between diets in concentrate DM intake, rumen fluid pH, meat colour score, fat colour score, eye muscle area, meat pH or meat temperature.