Abstract:
The aim of this study was to estimate the development of fertility in North-Central Namibia, former Ovamboland, from 1960 to 2001. Special attention was given to the onset of fertility decline and to the impact of the HIV epidemic on fertility. An additional aim was to introduce parish registers as a source of data for fertility research in Africa. Data used consisted of parish registers from Evangelical Lutheran congregations, the 1991 and 2001 Population and Housing Censuses, the 1992 and 2000 Namibia Demographic and Health Surveys, and the HIV sentinel surveillances of 1992-2004. Both period and cohort fertility were analysed. The P/F ratio method was used when analysing census data. The impact of HIV infection on fertility was estimated indirectly by comparing the fertility histories of women who died at an age of less than 50 years with the fertility of other women. The impact of the HIV epidemic on fertility was assessed both among infected women and in the general population. Fertility in the study population began to decline in 1980. The decline was rapid during the 1980s, levelled off in the early 1990s at the end of the war of independence, and then continued to decline until the end of the study period. According to parish registers, total fertility was 6.4 in the 1960s and 6.5 in the 1970s, and declined to 5.1 in the 1980s and 4.2 in the 1990s. Adjustment of these total fertility rates to correspond to levels of fertility based on data from the 1991 and 2001 censuses resulted in total fertility declining from 7.6 in 1960-79 to 6.0 in 1980-89, and to 4.9 in 1990-99. The decline was associated with increased age at first marriage, declining marital fertility and increasing premarital fertility. Fertility among adolescents increased, whereas the fertility of women in all other age groups declined. During the 1980s, the war of independence contributed to declining fertility through spousal separation and delayed marriages.
Contraception has been employed in the study region since the 1980s, but in the early 1990s, use of contraceptives was still so limited that fertility was higher in North-Central Namibia than in other regions of the country. In the 1990s, fertility decline was largely a result of the increased prevalence of contraception. HIV prevalence among pregnant women increased from 4% in 1992 to 25% in 2001. In 2001, total fertility among HIV-infected women (3.7) was lower than that among other women (4.8), resulting in total fertility of 4.4 among the general population in 2001. The HIV epidemic explained more than a quarter of the decline in total fertility at population level during most of the 1990s. The HIV epidemic also reduced the number of children born by reducing the number of potential mothers. In the future, HIV will have an extensive influence on both the size and age structure of the Namibian population. Although HIV influences demographic development through both fertility and mortality, the effect through changes in fertility will be smaller than the effect through mortality. In the study region, as in some other regions of southern Africa, a new type of demographic transition is under way, one in which population growth stagnates or even reverses because of the combined effects of declining fertility and increasing mortality, both of which are consequences of the HIV pandemic.
Abstract:
This paper describes a concept for a collision avoidance system for ships, based on model predictive control. A finite set of alternative control behaviors is generated by varying two parameters: offsets to the guidance course angle commanded to the autopilot, and changes to the propulsion command ranging from nominal speed to full reverse. Using simulated predictions of the trajectories of the obstacles and the ship, compliance with the Convention on the International Regulations for Preventing Collisions at Sea and the collision hazards associated with each of the alternative control behaviors are evaluated on a finite prediction horizon, and the optimal control behavior is selected. Robustness to sensing error, predicted obstacle behavior, and environmental conditions can be ensured by evaluating multiple scenarios for each control behavior. The method is conceptually and computationally simple and yet quite versatile, as it can account for the dynamics of the ship, the dynamics of the steering and propulsion system, forces due to wind and ocean current, and any number of obstacles. Simulations show that the method is effective and can manage complex scenarios with multiple dynamic obstacles and uncertainty associated with sensors and predictions.
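The selection loop the abstract describes (enumerate course-offset/propulsion pairs, simulate trajectories over a finite horizon, score each behavior, pick the minimum-cost one) can be sketched as below. The candidate offsets, speed factors, penalty weights, and straight-line prediction are illustrative assumptions, not the paper's actual cost function:

```python
import math

def predict(pos, heading, speed, horizon, dt=1.0):
    """Straight-line trajectory prediction over a finite horizon."""
    x, y = pos
    traj = []
    for _ in range(int(horizon / dt)):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        traj.append((x, y))
    return traj

def select_behavior(ship, obstacles, horizon=60.0, safe_dist=50.0):
    """Evaluate every (course offset, speed factor) pair and return the
    lowest-cost behavior; weights here are illustrative assumptions."""
    course_offsets = [math.radians(d) for d in (-90, -45, -15, 0, 15, 45, 90)]
    speed_factors = [1.0, 0.5, 0.0, -0.3]   # nominal ... full reverse
    best, best_cost = None, float("inf")
    for dpsi in course_offsets:
        for f in speed_factors:
            traj = predict(ship["pos"], ship["heading"] + dpsi,
                           f * ship["speed"], horizon)
            cost = 0.0
            for obs in obstacles:
                otraj = predict(obs["pos"], obs["heading"], obs["speed"], horizon)
                dmin = min(math.dist(p, q) for p, q in zip(traj, otraj))
                if dmin < safe_dist:                 # collision hazard term
                    cost += 1e3 * (safe_dist - dmin)
            cost += 0.5 * abs(dpsi) + 1.0 * (1.0 - f)  # deviation penalties
            if cost < best_cost:
                best, best_cost = (dpsi, f), cost
    return best

# head-on encounter: ship east-bound, obstacle west-bound on the same line
ship = {"pos": (0.0, 0.0), "heading": 0.0, "speed": 5.0}
obstacles = [{"pos": (300.0, 0.0), "heading": math.pi, "speed": 5.0}]
best = select_behavior(ship, obstacles)
```

In the head-on scenario the keep-course behavior is heavily penalized, so the selected behavior is a course offset and/or a speed change.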
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, with most labelling a pixel as a particular class. However, they often fail to generate reliable flood inundation mapping because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. A conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results.
Our research also identifies and explores a serious drawback in relation to endmember selections in current spectral unmixing methods which apply fixed sets of endmember classes or pure classes for mixture analysis of every pixel in an entire image. However, as it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers in every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using the pixel-dependent endmembers in unmixing significantly improves performance.
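The linear unmixing step and the effect of choosing a per-pixel endmember subset can be illustrated with a minimal sketch; the sum-to-one constraint is imposed through a heavily weighted extra equation, and the endmember spectra are invented toy data, not the thesis's imagery:

```python
import numpy as np

def unmix(pixel, endmembers, delta=1e3):
    """Linear unmixing with the sum-to-one constraint imposed via a
    heavily weighted extra equation (weight delta); columns of
    `endmembers` are endmember spectra."""
    bands, m = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, m))])
    b = np.append(pixel, delta)          # constraint: sum of abundances = 1
    abundances, *_ = np.linalg.lstsq(A, b, rcond=None)
    return abundances

# invented endmember spectra (columns: water, vegetation, soil)
E = np.array([[0.10, 0.05, 0.30],
              [0.05, 0.10, 0.35],
              [0.02, 0.08, 0.40],
              [0.01, 0.60, 0.45]])

rng = np.random.default_rng(0)
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1] + rng.normal(0.0, 0.005, 4)

full = unmix(pixel, E)            # fixed, full endmember set for every pixel
subset = unmix(pixel, E[:, :2])   # pixel-dependent endmember subset
```

With the pixel-dependent subset, the estimated fractions stay close to the true 0.7/0.3 mixture; forcing the full set onto a pixel that contains no soil is exactly the situation the paragraph above flags as a source of over-estimation.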
Abstract:
One of the foremost design considerations in microelectronics miniaturization is the use of embedded passives, which provide a practical solution. In a typical circuit, over 80 percent of the electronic components are passives such as resistors, inductors, and capacitors, which can take up almost 50 percent of the entire printed circuit board area. By integrating passive components within the substrate instead of on the surface, embedded passives reduce the system real estate, eliminate the need for discrete components and their assembly, enhance electrical performance and reliability, and potentially reduce the overall cost. Moreover, the technology is lead-free. Even with these advantages, embedded passive technology is at a relatively immature stage, and more characterization and optimization are needed for practical applications leading to its commercialization. This paper presents an entire process, from design and fabrication to electrical characterization and reliability testing, of embedded passives on multilayered microvia organic substrate. Two test vehicles focusing on resistors and capacitors have been designed and fabricated. Embedded capacitors in this study are made with a polymer/ceramic nanocomposite (BaTiO3) material to take advantage of the low processing temperature of polymers and the relatively high dielectric constant of ceramics; the values of these capacitors range from 50 pF to 1.5 nF, with capacitance per area of approximately 1.5 nF/cm2. Limited high frequency measurement of these capacitors was performed. Furthermore, reliability assessments of thermal shock and temperature humidity tests based on JEDEC standards were carried out. Resistors used in this work are of three types: 1) carbon ink based polymer thick film (PTF), 2) resistor foils with known sheet resistivities, which are laminated to the printed wiring board (PWB) during a sequential build-up (SBU) process, and 3) thin-film resistors plated by an electroless method.
Realization of embedded resistors on conventional board-level high-loss epoxy (~0.015 at 1 GHz) and the proposed low-loss BCB dielectric (~0.0008 at > 40 GHz) has been explored in this study. Ni-P and Ni-W-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. For the first time, benzocyclobutene (BCB) has been proposed as a board-level dielectric for an advanced System-on-Package (SOP) module, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. Although embedded passives are more reliable because they eliminate solder joint interconnects, they also introduce other concerns such as cracks, delamination and component instability. More layers may be needed to accommodate the embedded passives, and the various materials within the substrate may cause significant thermo-mechanical stress due to coefficient of thermal expansion (CTE) mismatch. In this work, numerical models of embedded capacitors have been developed to qualitatively examine the effects of process conditions and the changes in electrical performance due to thermo-mechanical deformations. Also, a prototype working product with a board-level design including embedded resistors and capacitors is under way. Preliminary results of these are presented.
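The quoted capacitance density is consistent with a simple parallel-plate estimate. The relative permittivity (~20) and dielectric thickness (~12 µm) below are assumed values chosen to illustrate the calculation, not the paper's measured parameters:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_area(eps_r, thickness_m):
    """Parallel-plate capacitance density C/A = eps0 * eps_r / d, in F/m^2."""
    return EPS0 * eps_r / thickness_m

# assumed nanocomposite permittivity (~20) and film thickness (~12 um)
c_per_m2 = capacitance_per_area(20.0, 12e-6)
c_nf_cm2 = c_per_m2 * 1e9 / 1e4   # convert F/m^2 -> nF/cm^2
```

For these assumed values the estimate lands at roughly 1.5 nF/cm2, matching the order of magnitude reported in the abstract.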
Abstract:
Purpose: To assess the effect of ultrasound modulation of near infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. Methods: A unique method to estimate the phase of the modulated NIR light, making use of only time-averaged intensity measurements with a charge coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to estimation of the optical scattering coefficient. A Monte-Carlo-model-based numerical estimation of phase under ultrasound modulation is performed to verify the experimental results. Results: The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed effective scattering coefficient enhancement in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue mimicking the tumor necrotic core than for the normal tissue. Conclusions: The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by NIR light, increasing the measured phase and causing the enhancement in the effective scattering coefficient. The ultrasound modulation of NIR light could provide better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient, in the ultrasound focal region, is validated using both experimental measurements and Monte-Carlo simulations. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3456441]
Abstract:
An application of direct methods to dynamic security assessment of power systems using structure-preserving energy functions (SPEF) is presented. The transient energy margin (TEM) is used as an index for checking the stability of the system as well as for ranking the contingencies based on their severity. The computation of the TEM requires the evaluation of the critical energy and the energy at fault clearing. Usually this is done by simulating the faulted trajectory, which is time-consuming. In this paper, a new algorithm which eliminates the faulted trajectory estimation is presented to calculate the TEM. The system equations and the SPEF are developed using the centre-of-inertia (COI) formulation, and the loads are modelled as arbitrary functions of the respective bus voltages. The critical energy is evaluated using the potential energy boundary surface (PEBS) method. The method is illustrated by considering two realistic power system examples.
Abstract:
Three independent studies have been reported on the free energy of formation of NiWO4. Results of these measurements are analyzed by the 'third-law' method, using thermal functions for NiWO4 derived from both low and high temperature heat capacity measurements. Values for the standard molar enthalpy of formation of NiWO4 at 298.15 K obtained from 'third-law' analysis are compared with direct calorimetric determinations. Only one set of free energy measurements is found to be compatible with the calorimetric enthalpies of formation. The selected value for Δf H°m (NiWO4, cr, 298.15 K) is the average of the three calorimetric measurements, using both high temperature solution and combustion techniques, and the compatible free energy determination. A new set of evaluated data for NiWO4 is presented.
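For reference, the 'third-law' evaluation converts each measured reaction Gibbs energy into a standard enthalpy at 298.15 K via the free-energy functions Φ (often tabulated as "fef") obtained from heat capacity data; schematically:

```latex
\Delta_{\mathrm r} H^{\circ}(298.15\,\mathrm{K})
  = \Delta_{\mathrm r} G^{\circ}(T) + T\,\Delta_{\mathrm r}\Phi(T),
\qquad
\Phi(T) = -\,\frac{G^{\circ}(T) - H^{\circ}(298.15\,\mathrm{K})}{T}.
```

Because each temperature point yields an independent enthalpy estimate, systematic drift of the derived enthalpies with T exposes inconsistencies between the free energy data and the thermal functions, which is how the incompatible data sets are identified.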
Abstract:
Seismic design of reinforced soil structures involves many uncertainties arising from the backfill soil properties and the tensile strength of the reinforcement, which are not addressed in current design guidelines. This paper highlights the significance of variability in the internal stability assessment of reinforced soil structures. Reliability analysis is applied to estimate the probability of failure, and a pseudo-static approach has been used for the calculation of the tensile strength and length of the reinforcement needed to maintain internal stability against tension and pullout failures. A logarithmic spiral failure surface has been considered in conjunction with the limit equilibrium method. Two modes of failure, namely tension failure and pullout failure, have been considered. The influence of variations in the backfill soil friction angle, the tensile strength of the reinforcement, and the horizontal seismic acceleration on the reliability indices against tension failure and pullout failure of reinforced earth structures is discussed.
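The reliability computation can be illustrated with a generic capacity-demand limit state g = R − S; the distributions and moments below are invented placeholders (the paper's actual limit states involve pseudo-static tension and pullout expressions):

```python
import random
import statistics

def reliability(mean_R, cov_R, mean_S, cov_S, n=200_000, seed=1):
    """Monte Carlo estimate of Pf = P(g = R - S < 0) and the
    corresponding reliability index beta = -Phi^{-1}(Pf)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        R = rng.gauss(mean_R, cov_R * mean_R)   # capacity, e.g. pullout resistance
        S = rng.gauss(mean_S, cov_S * mean_S)   # demand, e.g. seismic tension
        if R - S < 0:
            failures += 1
    pf = failures / n
    beta = -statistics.NormalDist().inv_cdf(pf)
    return pf, beta

# placeholder moments: capacity 100 +/- 10%, demand 60 +/- 20%
pf, beta = reliability(100.0, 0.10, 60.0, 0.20)
```

For independent normal R and S, the exact index is (μR − μS)/√(σR² + σS²) ≈ 2.56 for these placeholder moments, so the Monte Carlo estimate should land close to that value.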
Abstract:
Sandalwood is an economically important aromatic tree belonging to the family Santalaceae. The trees are used mainly for their fragrant heartwood and oil, which have immense potential for foreign exchange. Very little information is available on the genetic diversity in this species. Hence, studies were initiated and genetic diversity was estimated using RAPD markers in 51 genotypes of Santalum album procured from different geographical regions of India and three exotic lines of S. spicatum from Australia. Eleven selected Operon primers (10-mer) generated a total of 156 consistent and unambiguous amplification products ranging from 200 bp to 4 kb. Rare and genotype-specific bands were identified which could be effectively used to distinguish the genotypes. Genetic relationships within the genotypes were evaluated by generating a dissimilarity matrix based on Ward's method (squared Euclidean distance). The phenetic dendrogram and the Principal Component Analysis separated the 51 Indian genotypes from the three Australian lines. The cluster analysis indicated that sandalwood germplasm within India constitutes a broad genetic base, with values of genetic dissimilarity ranging from 15 to 91%. A core collection of 21 selected individuals captured the same diversity as the entire population. The results show that RAPD analysis is an efficient marker technology for estimating genetic diversity and relatedness, thereby enabling the formulation of appropriate strategies for conservation, germplasm management, and selection of diverse parents for sandalwood improvement programmes.
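The first step of such an analysis, a squared-Euclidean dissimilarity matrix over binary band-presence profiles (the quantity Ward's method clusters on), can be sketched as follows; the 0/1 profiles are toy data, not the study's 156-band matrix:

```python
import numpy as np

def dissimilarity_matrix(bands):
    """Pairwise squared Euclidean distances between binary RAPD band
    profiles (rows = genotypes, columns = bands scored present/absent)."""
    b = np.asarray(bands, dtype=float)
    diff = b[:, None, :] - b[None, :, :]
    return (diff ** 2).sum(axis=-1)

# toy profiles: three similar genotypes plus one divergent exotic line
profiles = [[1, 1, 0, 1, 0, 1],
            [1, 1, 0, 1, 1, 1],
            [1, 0, 0, 1, 0, 1],
            [0, 0, 1, 0, 1, 0]]
D = dissimilarity_matrix(profiles)
```

In the toy matrix, every within-group distance is smaller than every distance to the divergent line, which is the pattern that makes the exotic accessions split off first in a Ward dendrogram.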
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the means and variances of GCM predictors relative to the observations or the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that while it may reduce the bias in the mean and variance of a predictor variable, it is much harder to accommodate the bias in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias should be accounted for; otherwise it will propagate in the computations for subsequent years. A statistical method based on equi-probability transformation is applied in this study after downscaling, to remove the bias from the predicted hydrologic variable relative to the observed hydrologic variable for a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
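The equi-probability transformation amounts to quantile mapping between the modelled and observed baseline distributions: each predicted value is passed through the modelled empirical CDF and the observed empirical CDF is inverted at the same probability. A minimal sketch (variable names and toy data are ours, not the study's):

```python
import numpy as np

def quantile_map(predicted, observed_baseline, predicted_baseline):
    """Equi-probability (quantile-quantile) transformation: map each
    predicted value through the modelled CDF onto the observed CDF."""
    pb = np.sort(predicted_baseline)
    ob = np.sort(observed_baseline)
    # empirical non-exceedance probability of each new prediction
    p = np.searchsorted(pb, predicted, side="right") / len(pb)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    # invert the observed empirical CDF at the same probabilities
    quantiles = (np.arange(len(ob)) + 0.5) / len(ob)
    return np.interp(p, quantiles, ob)

obs_base = np.arange(100.0, 200.0)   # toy observed baseline flows
mod_base = obs_base + 50.0           # model biased ~50 units high
corrected = quantile_map(np.array([175.0]), obs_base, mod_base)
```

With the model biased uniformly high by 50 units, a new prediction of 175 maps back to about 125, i.e. the bias is removed through the matched quantiles rather than by a fixed offset.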
Abstract:
Genetic Algorithms (GAs) are recognized as an alternative class of computational models which mimic natural evolution to solve problems in a wide range of domains, including machine learning, music generation, and genetic synthesis. In the present study, a Genetic Algorithm has been employed for damage assessment of composite structural elements. It is considered that a state of damage can be modeled as a reduction in stiffness. The task is to determine the magnitude and location of the damage. In a composite plate that is discretized into a set of finite elements, if the jth element is damaged, the GA-based technique will predict the reduction in Ex and Ey and the location j. The method makes use of the fact that the natural frequency decreases with decreasing stiffness. The computation of the natural frequencies of any two modes of the damaged plate for the assumed damage parameters is facilitated by eigenvalue sensitivity analysis. The eigenvalue sensitivities are the derivatives of the eigenvalues with respect to certain design parameters. If ω_i^u is the natural frequency of the ith mode of the undamaged plate and ω_i^d is that of the damaged plate, with δω_i as the difference between the two, and δω_k the corresponding difference for the kth mode, then R is defined as the ratio of the two. For a random selection of Ex, Ey and j, a ratio R_i is obtained. A proper combination of Ex, Ey and j which makes R_i − R = 0 is found by the Genetic Algorithm.
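The idea can be demonstrated on a toy 1-D analogue: a fixed-free spring-mass chain in place of the plate finite element model, with the GA matching the measured shifts of the first two natural frequencies directly (a simplification of the paper's ratio-based criterion). Everything below, including the chain model, GA parameters, and damage values, is an illustrative sketch:

```python
import numpy as np

def frequencies(k, n_modes=2):
    """Lowest natural frequencies of a fixed-free spring-mass chain with
    unit masses; k[e] is the stiffness of spring e."""
    n = len(k)
    K = np.zeros((n, n))
    K[0, 0] += k[0]                  # spring 0 ties mass 0 to ground
    for e in range(1, n):            # spring e ties masses e-1 and e
        K[e - 1, e - 1] += k[e]
        K[e, e] += k[e]
        K[e - 1, e] -= k[e]
        K[e, e - 1] -= k[e]
    return np.sqrt(np.sort(np.linalg.eigvalsh(K))[:n_modes])

def ga_identify(d_meas, n_elem=6, pop=40, gens=60, seed=3):
    """GA search for the damage location j and stiffness reduction a that
    reproduce the measured shifts of the first two natural frequencies."""
    rng = np.random.default_rng(seed)
    w_u = frequencies(np.ones(n_elem))

    def fitness(j, a):
        k = np.ones(n_elem)
        k[j] *= 1.0 - a
        d = w_u - frequencies(k)
        return abs(d[0] - d_meas[0]) + abs(d[1] - d_meas[1])

    P = [(int(rng.integers(n_elem)), rng.uniform(0.05, 0.7))
         for _ in range(pop)]
    for _ in range(gens):
        P = sorted(P, key=lambda g: fitness(*g))[: pop // 2]  # elitism
        children = []
        while len(P) + len(children) < pop:
            (j1, _), (_, a2) = P[rng.integers(len(P))], P[rng.integers(len(P))]
            j, a = j1, a2                                     # crossover
            if rng.random() < 0.3:                            # mutate location
                j = int(rng.integers(n_elem))
            if rng.random() < 0.3:                            # mutate severity
                a = float(np.clip(a + rng.normal(0.0, 0.05), 0.01, 0.8))
            children.append((j, a))
        P += children
    return min(P, key=lambda g: fitness(*g))

# synthetic "measurement": 30% stiffness loss at element 2
k_true = np.ones(6)
k_true[2] *= 0.7
d_meas = frequencies(np.ones(6)) - frequencies(k_true)
j_found, a_found = ga_identify(d_meas)
```

Because only the true location and severity reproduce both frequency shifts simultaneously, the GA converges to element 2 with a reduction near 30%.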
Abstract:
Artificial Neural Networks (ANNs) have recently been proposed as an alternative method for solving certain traditional problems in power systems where conventional techniques have not achieved the desired speed, accuracy or efficiency. This paper presents an application of ANNs in which the aim is to achieve fast voltage stability margin assessment of a power network in an energy control centre (ECC), with a reduced number of appropriate inputs. The L-index has been used for assessing the voltage stability margin. Investigations are carried out on the influence of the information encompassed in the input vector and the target output vector on the learning time and test performance of a multilayer perceptron (MLP) based ANN model. An LP-based algorithm for voltage stability improvement is used for generating meaningful training patterns in the normal operating range of the system. From the generated set of training patterns, appropriate training patterns are selected based on a statistical correlation process, a sensitivity matrix approach, a contingency ranking approach and a concentric relaxation method. Simulation results on a 24-bus EHV system, a 30-bus modified IEEE system, and an 82-bus Indian power network are presented for illustration purposes.
Abstract:
This paper presents the details of nonlinear finite element analysis (FEA) of three point bending specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization and experimentation of HSC, HSC1 and UHSC have been provided. A cracking strength criterion has been used for the simulation of crack propagation in the nonlinear FEA, and the FEA procedure based on this criterion is outlined. A bilinear tension softening relation has been used for modeling the cohesive stresses ahead of the crack tip. Numerical studies have been carried out on fracture analysis of three point bending specimens. It is observed from the studies that the computed values from FEA are in very good agreement with the corresponding experimental values. The computed values of stress vs. crack width will be useful for the evaluation of fracture energy, crack tip opening displacement and fracture toughness. Further, these values can also be used for crack growth studies, remaining life assessment and residual strength evaluation of concrete structural components.
Abstract:
There has been growing interest in understanding energy metabolism in human embryos generated using assisted reproductive techniques (ART) for improving the overall success rate of the method. Using NMR spectroscopy as a noninvasive tool, we studied human embryo metabolism to identify specific biomarkers to assess the quality of embryos for their implantation potential. The study was based on estimation of pyruvate, lactate and alanine levels in the growth medium, ISM1, used in the culture of embryos. An NMR study involving 127 embryos from 48 couples revealed that embryos transferred on Day 3 (after 72 h in vitro culture) with successful implantation (pregnancy) exhibited significantly (p < 10^-5) lower pyruvate/alanine ratios compared to those that failed to implant. Lactate levels in media were similar for all embryos. This implies that, in addition to lactate production, successfully implanted embryos use pyruvate for alanine production and other cellular functions. While pyruvate and alanine individually have been used as biomarkers, the present study highlights the potential of combining them to provide a single parameter that correlates strongly with implantation potential. Copyright (C) 2012 John Wiley & Sons, Ltd.
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity was evaluated in terms of the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show that there are significant differences between the amount of time devoted to activities related to understanding the problem and the typology of the design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small with intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. Also, the amount of time spent in each design phase was found to influence the results in terms of creativity, although the results are not strong enough to determine the extent of this effect. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]