884 results for Problem analysis
Abstract:
The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L, regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. © 2002 Elsevier Science Ltd. All rights reserved.
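The weighted-average aggregation step described above lends itself to a compact sketch. The following is a minimal, hypothetical Python illustration: the linear ramp membership function, the weights and the toy data are assumptions for illustration, not the paper's calibrated membership models or regression-derived weights.

```python
import numpy as np

def ramp_membership(x, lo, hi):
    """Linear ramp membership: 0 below lo, 1 above hi (an assumed, illustrative choice)."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def mineral_favourability_index(memberships, weights):
    """Weighted-average aggregation of per-variable membership grades into an MFI.

    memberships: (n_cells, n_variables) array of grades in [0, 1]
    weights:     (n_variables,) non-negative weights (e.g. regression-derived)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the MFI stays in [0, 1]
    return np.asarray(memberships) @ w

# Toy example: 3 cells, 2 geo-variables, hypothetical weights
m = np.array([[0.9, 0.8], [0.2, 0.4], [0.5, 1.0]])
mfi = mineral_favourability_index(m, [2.0, 1.0])
```

Because the weights are normalised and the grades lie in [0, 1], the resulting MFI is itself a valid membership grade for the "favourable" fuzzy set.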
Abstract:
The standard approach to modelling production under uncertainty has relied on the concept of the stochastic production function. In the present paper, it is argued that a state-contingent production model is more flexible and realistic. The model is applied to the problem of drought policy.
Abstract:
Objective: To evaluate the cost of atrial fibrillation (AF) to health and social services in the UK in 1995 and, based on epidemiological trends, to project this estimate to 2000. Design, setting, and main outcome measures: Contemporary estimates of health care activity related to AF were applied to the whole population of the UK on an age and sex specific basis for the year 1995. The activities considered (and costs calculated) were hospital admissions, outpatient consultations, general practice consultations, and drug treatment (including the cost of monitoring anticoagulant treatment). By adjusting for the progressive aging of the British population and related increases in hospital admissions, the cost of AF was also projected to the year 2000. Results: There were 534 000 people with AF in the UK during 1995. The 'direct' cost of health care for these patients was £244 million (≈€350 million), or 0.62% of total National Health Service (NHS) expenditure. Hospitalisations and drug prescriptions accounted for 50% and 20% of this expenditure, respectively. Long term nursing home care after hospital admission cost an additional £46.4 million (≈€66 million). The direct cost of AF rose to £459 million (≈€655 million) in 2000, equivalent to 0.97% of total NHS expenditure based on 1995 figures. Nursing home costs rose to £111 million (≈€160 million). Conclusions: AF is an extremely costly public health problem.
Abstract:
Rumor discourse has been conceptualized as an attempt to reduce anxiety and uncertainty via a process of social sensemaking. Fourteen rumors transmitted on various Internet discussion groups were observed and content analyzed over the life of each rumor. With this (previously unavailable) more ecologically robust methodology, the intertwined threads of sensemaking and the gaining of interpretive control are clearly evident in the tapestry of rumor discourse. We propose a categorization of statements (the Rumor Interaction Analysis System) and find differences between dread rumors and wish rumors in anxiety-related content categories. Cluster analysis of these statements reveals a typology of voices (communicative postures) exhibiting sensemaking activities of the rumor discussion group, such as hypothesizing, skeptical critique, directing of activities to gain information, and presentation of evidence. These findings enrich our understanding of the long-implicated sensemaking function of rumor by clarifying the elements of communication that operate in rumor's social context.
Abstract:
Research expeditions into remote areas to collect biological specimens provide vital information for understanding biodiversity. However, major expeditions to little-known areas are expensive and time consuming, time is short, and well-trained people are difficult to find. In addition, processing the collections and obtaining accurate identifications takes time and money. In order to get the maximum return for the investment, we need to determine the location of the collecting expeditions carefully. In this study we used environmental variables and information on existing collecting localities to help determine the sites of future expeditions. Results from other studies were used to aid in the selection of the environmental variables, including variables relating to temperature, rainfall, lithology and distance between sites. A survey gap analysis tool based on 'ED complementarity' was employed to select the sites that would most likely contribute the most new taxa. The tool does not evaluate how well collected a previously visited survey site might be; however, collecting effort was estimated based on species accumulation curves. We used the number of collections and/or number of species at each collecting site to eliminate those we deemed poorly collected. Plants, birds, and insects from Guyana were examined using the survey gap analysis tool, and sites for future collecting expeditions were determined. The south-east section of Guyana had virtually no collecting information available. It has been inaccessible for many years for political reasons and as a result, eight of the first ten sites selected were in that area. In order to evaluate the remainder of the country, and because there are no immediate plans by the Government of Guyana to open that area to exploration, that section of the country was not included in the remainder of the study. The range of the ED complementarity values dropped sharply after the first ten sites were selected.
For plants, the group for which we had the most records, areas selected included several localities in the Pakaraima Mountains, the border with the south-east, and one site in the north-west. For birds, a moderately collected group, the strongest need was in the north-west followed by the east. Insects had the smallest data set and the largest range of ED complementarity values; the results gave strong emphasis to the southern parts of the country, but most of the locations appeared to be equidistant from one another, most likely because of insufficient data. Results demonstrate that the use of a survey gap analysis tool designed to solve a locational problem using continuous environmental data can help maximize our resources for gathering new information on biodiversity. © 2005 The Linnean Society of London.
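The gap-selection idea can be sketched with a simple greedy rule: repeatedly pick the candidate site farthest, in environmental-variable space, from every already-surveyed or already-selected site. This is a simplified stand-in for the 'ED complementarity' criterion, not the original algorithm, and the site coordinates below are invented for illustration.

```python
import numpy as np

def greedy_survey_gaps(env, surveyed_idx, n_new):
    """Greedy survey-gap selection (illustrative proxy for ED complementarity).

    env:          (n_sites, n_env_vars) environmental descriptors per site
    surveyed_idx: indices of sites already surveyed
    n_new:        number of new expedition sites to choose
    """
    env = np.asarray(env, dtype=float)
    chosen = list(surveyed_idx)
    picks = []
    for _ in range(n_new):
        # distance from every site to its nearest chosen site
        d = np.min(np.linalg.norm(env[:, None, :] - env[None, chosen, :], axis=2), axis=1)
        best = int(np.argmax(d))   # the least-covered site fills the biggest gap
        picks.append(best)
        chosen.append(best)
    return picks

# Toy sites described by two environmental variables (e.g. temperature, rainfall)
sites = np.array([[0, 0], [0, 1], [5, 5], [10, 10], [10, 9]])
new_sites = greedy_survey_gaps(sites, surveyed_idx=[0], n_new=2)
```

Already-chosen sites have zero distance to themselves and are never re-picked, mirroring the drop in complementarity values once an environmental region is covered.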
Abstract:
A common problem encountered during the development of MS methods for the quantitation of small organic molecules by LC-MS is the formation of non-covalently bound species or adducts in the electrospray interface. Often the population of the molecular ion is insignificant compared to those of all other forms of the analyte produced in the electrospray, making it difficult to obtain the sensitivity required for accurate quantitation. We have investigated the effects of the following variables: orifice potential, nebulizer gas flow, temperature, solvent composition and the sample pH on the relative distributions of ions of the types MH+, MNa+, MNH4+, and 2MNa+, where M represents a small organic molecule: BAY 11-7082 ((E)-3-[4-methylphenylsulfonyl]-2-propenenitrile). Orifice potential, solvent composition and the sample pH had the greatest influence on the relative distributions of these ions, making these parameters the most useful for optimizing methods for the quantitation of small molecules.
Abstract:
Selection of machine learning techniques requires a certain sensitivity to the requirements of the problem. In particular, the problem can be made more tractable by deliberately using algorithms that are biased toward solutions of the requisite kind. In this paper, we argue that recurrent neural networks have a natural bias toward a problem domain of which biological sequence analysis tasks are a subset. We use experiments with synthetic data to illustrate this bias. We then demonstrate that this bias can be exploited using a data set of protein sequences containing several classes of subcellular localization targeting peptides. The results show that, compared with feedforward networks, recurrent neural networks generally perform better on sequence analysis tasks. Furthermore, as the patterns within the sequence become more ambiguous, the choice of specific recurrent architecture becomes more critical.
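The "natural bias" at issue is that a recurrent layer carries a hidden state along the sequence, so its output depends on input order. A minimal Elman-style recurrent step makes this concrete; the layer sizes and random weights below are illustrative, not the paper's architectures.

```python
import numpy as np

def rnn_forward(x_seq, W_in, W_rec, b):
    """Minimal Elman-style recurrent layer: the hidden state h threads context
    through the sequence, which is the inductive bias discussed above."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b)   # state mixes input with history
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))
W_rec = rng.normal(size=(4, 4)) * 0.1           # small recurrent weights for stability
b = np.zeros(4)
seq = rng.normal(size=(6, 3))                   # a length-6 sequence of 3-dim inputs
states = rnn_forward(seq, W_in, W_rec, b)
```

Feeding the same sequence reversed yields a different final state, whereas a feedforward network applied position-wise would be indifferent to order.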
Abstract:
Obstructive sleep apnea (OSA) is a highly prevalent disease in which the upper airways collapse during sleep, leading to serious consequences. The gold standard of diagnosis, called polysomnography (PSG), requires a full-night hospital stay connected to over ten channels of measurements requiring physical contact with sensors. PSG is inconvenient, expensive and unsuited for community screening. Snoring is the earliest symptom of OSA, but its potential in clinical diagnosis is not yet fully recognized. Diagnostic systems that intend to use snore-related sounds (SRS) face the difficult problem of how to define a snore. In this paper, we present a working definition of a snore, and propose algorithms to segment SRS into classes of pure breathing, silence and voiced/unvoiced snores. We propose a novel feature termed the 'intra-snore-pitch-jump' (ISPJ) to diagnose OSA. Working on clinical data, we show that ISPJ delivers OSA detection sensitivities of 86-100% while holding specificity at 50-80%. These numbers indicate that snore sounds and the ISPJ have the potential to be good candidates for a take-home device for OSA screening. Snore sounds have the significant advantage that they can be conveniently acquired with low-cost non-contact equipment. The segmentation results presented in this paper have been derived using data from eight patients as the training set and another eight patients as the testing set. ISPJ-based OSA detection results have been derived using training data from 16 subjects and testing data from 29 subjects.
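The ISPJ idea, detecting abrupt pitch discontinuities within a snore episode, can be sketched once a pitch track is available. The function below is an illustrative stand-in, not the published feature definition: the `jump_ratio` threshold and the up/down-counting rule are assumptions.

```python
import numpy as np

def intra_snore_pitch_jump(pitch_track, jump_ratio=2.0):
    """Count large frame-to-frame pitch jumps within one snore episode.

    pitch_track: per-frame pitch estimates in Hz (voiced frames only)
    jump_ratio:  assumed threshold; a ratio >= jump_ratio in either
                 direction is counted as a jump
    """
    p = np.asarray(pitch_track, dtype=float)
    ratios = np.maximum(p[1:], p[:-1]) / np.minimum(p[1:], p[:-1])
    return int(np.sum(ratios >= jump_ratio))

# A pitch track (Hz) with one abrupt, octave-like excursion
jumps = intra_snore_pitch_jump([110, 115, 112, 240, 118, 120])
```

Note that a single excursion registers two jumps (one up, one down) under this counting rule; a real screening feature would be aggregated over many episodes before thresholding.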
Abstract:
This study examined the genetic and environmental relationships among 5 academic achievement skills of a standardized test of academic achievement, the Queensland Core Skills Test (QCST; Queensland Studies Authority, 2003a). QCST participants included 182 monozygotic pairs and 208 dizygotic pairs (mean age 17 ± 0.4 years). IQ data were included in the analysis to correct for ascertainment bias. A genetic general factor explained virtually all genetic variance in the component academic skills scores, and accounted for 32% to 73% of their phenotypic variances. It also explained 56% and 42% of variation in Verbal IQ and Performance IQ respectively, suggesting that this factor is genetic g. Modest specific genetic effects were evident for achievement in mathematical problem solving and written expression. A single common factor adequately explained common environmental effects, which were also modest, and possibly due to assortative mating. The results suggest that general academic ability, derived from genetic influences and to a lesser extent common environmental influences, is the primary source of variation in component skills of the QCST.
Abstract:
Purpose - In many scientific and engineering fields, large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered. For example, heat transfer from the mantle into the upper crust of the Earth is a typical example. The main purpose of this paper is to develop and present a new combined methodology to solve large-scale heat transfer problems with temperature-dependent pore-fluid densities in the lithosphere and crust scales. Design/methodology/approach - The theoretical approach is used to determine the thickness and the related thermal boundary conditions of the continental crust on the lithospheric scale, so that some important information can be provided accurately for establishing a numerical model of the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust on the crustal scale. The main advantage in using the proposed combination of the theoretical and numerical approaches is that if the thermal distribution in the crust is of primary interest, the use of a reasonable numerical model on the crustal scale can result in a significant reduction in computational effort. Findings - From the ore body formation and mineralization points of view, the present analytical and numerical solutions have demonstrated that the conductive-and-advective lithosphere with variable pore-fluid density is the most favourable lithosphere because it may result in the thinnest lithosphere, so that the temperature at the near surface of the crust can be hot enough to generate shallow ore deposits there. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot materials from the mantle may further reduce the thickness of the lithosphere.
Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems on the crustal scale; and investigate the fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be effectively used to consider the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
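The conductive-and-advective balance invoked above has a textbook one-dimensional closed form that is useful for the kind of validation the authors describe. The sketch below is that textbook simplification (constant properties, uniform vertical throughflow), not the paper's model; the sign convention (depth z measured downward, negative Péclet number for upward throughflow) and all numbers are assumptions for illustration.

```python
import math

def conductive_advective_profile(z, L, T_top, T_bottom, peclet):
    """Steady 1-D temperature in a layer of thickness L with uniform vertical
    throughflow, from kappa*T'' - v*T' = 0:

        T(z) = T_top + (T_bottom - T_top) * (exp(Pe*z/L) - 1) / (exp(Pe) - 1)

    Pe -> 0 recovers the purely conductive (linear) geotherm; Pe < 0
    (upward flow, with z measured downward) steepens the near-surface
    gradient, i.e. a hotter shallow crust."""
    if abs(peclet) < 1e-12:
        return T_top + (T_bottom - T_top) * z / L
    return T_top + (T_bottom - T_top) * math.expm1(peclet * z / L) / math.expm1(peclet)
```

With, say, T_top = 0, T_bottom = 1000 and Pe = -5, the mid-layer temperature far exceeds the conductive value of 500, which is the qualitative effect of upward throughflow described in the Findings.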
Abstract:
During the analytical method development for BAY 11-7082 ((E)-3-[4-methylphenylsulfonyl]-2-propenenitrile), using HPLC-MS-MS and HPLC-UV, we observed that the protein removal process (both ultrafiltration and precipitation using organic solvents) prior to HPLC brought about a significant reduction in the concentration of this compound. The use of a structurally similar internal standard, BAY 11-7085 ((E)-3-[4-t-butylphenylsulfonyl]-2-propenenitrile), was not effective in compensating for the loss of analyte, as the extent of its reduction was different to that of the analyte. We present here a systematic investigation of this problem and a new validated method for the determination of BAY 11-7082. © 2006 Elsevier B.V. All rights reserved.
Abstract:
Background: Pain is defined as both a sensory and an emotional experience. Acute postoperative tooth extraction pain is assessed and treated as a physiological (sensory) pain, while chronic pain is a biopsychosocial problem. The purpose of this study was to assess whether psychological and social changes occur in the acute pain state. Methods: A biopsychosocial pain questionnaire was completed by 438 subjects (165 males, 273 females) with acute postoperative pain at 24 hours following the surgical extraction of teeth and compared with 273 subjects (78 males, 195 females) with chronic orofacial pain. Statistical methods used a k-means cluster analysis. Results: Three clusters were identified in the acute pain group: 'unaffected', 'disabled' and 'depressed, anxious and disabled'. Psychosocial effects showed 24.8 per cent feeling 'distress/suffering' and 15.1 per cent 'sad and depressed'. Females reported higher pain intensity and more distress, depression and inadequate medication for pain relief (p
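The k-means clustering named in the Methods can be sketched in a few lines of plain Lloyd's iteration. The two-dimensional scores below are invented for illustration, not the study's questionnaire data, and the first-k-rows initialisation is chosen only to keep the sketch deterministic.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means: assign each subject to the nearest centre,
    then move each centre to the mean of its assigned subjects."""
    centres = X[:k].astype(float).copy()   # deterministic init from first k rows
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

# Illustrative (pain intensity, distress) scores forming three loose groups
X = np.array([[1.0, 1.0], [5.0, 5.0], [9.0, 1.0],
              [1.2, 0.9], [5.1, 4.8], [8.8, 1.2]])
labels, centres = kmeans(X, k=3)
```

On questionnaire data one would standardise the item scores first and run several random initialisations, keeping the lowest-inertia solution.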
Abstract:
We consider a buying-selling problem when two stops of a sequence of independent random variables are required. An optimal stopping rule and the value of a game are obtained.
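The backward-induction logic behind such stopping rules can be illustrated on the simplest one-stop case; the i.i.d. uniform-offers setting below is an assumption for illustration, not the paper's model, and the buying-selling problem chains two such stops (buy low, then sell high).

```python
def stopping_values(n):
    """Backward-induction values for one optimal stop over n i.i.d. U(0,1)
    offers.  With v_k the value when k offers remain (v_0 = 0):

        v_k = E[max(X, v_{k-1})] = (1 + v_{k-1}**2) / 2

    and the optimal rule is: accept the current offer iff it beats the
    continuation value v_{k-1}."""
    values = [0.0]
    for _ in range(n):
        v = values[-1]
        values.append((1.0 + v * v) / 2.0)
    return values

vals = stopping_values(3)   # vals[k] = value with k offers remaining
```

The values increase with the number of remaining offers, reflecting the option value of waiting; the two-stop game nests this recursion inside a second one for the first (buying) stop.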
Abstract:
Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of the MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed of light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models.
The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B₀ shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
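The speed-of-light downscaling works because the CFL-stable FDTD time step is inversely proportional to the wave speed on the grid. A minimal sketch (Cartesian grid for simplicity rather than the paper's cylindrical coordinates; cell sizes and scaling factor are hypothetical):

```python
import math

def cfl_time_step(dx, dy, dz, c):
    """Largest stable FDTD time step under the Courant-Friedrichs-Lewy
    condition for a 3-D Cartesian grid:  dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)).
    Cylindrical grids need the analogous metric terms."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

c0 = 2.998e8                                             # physical speed of light, m/s
dt_full = cfl_time_step(1e-3, 1e-3, 1e-3, c0)            # ~2 ps for 1 mm cells
dt_scaled = cfl_time_step(1e-3, 1e-3, 1e-3, c0 / 1e4)    # downscaled c: 1e4x larger step
```

Dividing c by 10^4 enlarges the admissible time step by exactly that factor, which is what makes millisecond-scale eddy-current transients tractable for FDTD, provided the downscaling does not disturb the quasi-static field solution of interest.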
Abstract:
We revisit the one-unit gradient ICA algorithm derived from the kurtosis function. By carefully studying properties of the stationary points of the discrete-time one-unit gradient ICA algorithm, convergence can be proved under a suitable condition on the learning rate. This condition helps alleviate the guesswork involved in choosing a suitable learning rate in practical computation. These results may be useful for extracting independent source signals online.
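A one-unit kurtosis-based gradient iteration of the kind the abstract analyses can be sketched as follows. This is a generic textbook form, not the paper's exact update or its convergence condition: the fixed learning rate, iteration count and demo sources are assumptions, and the data are taken to be pre-whitened (so E[x xᵀ] ≈ I).

```python
import numpy as np

def one_unit_kurtosis_ica(X, lr=0.01, n_iter=500, seed=0):
    """One-unit gradient ICA on pre-whitened data X (components x samples):
    gradient ascent on |kurtosis(w @ X)|, renormalising w after each step.
    lr is illustrative; the paper's result is precisely a condition on it."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    n = X.shape[1]
    for _ in range(n_iter):
        y = w @ X
        kurt = np.mean(y**4) - 3 * np.mean(y**2) ** 2
        # d kurt / d w, using E[x y] ~= w for whitened data
        grad = 4 * (X @ y**3) / n - 12 * np.mean(y**2) * w
        w += lr * np.sign(kurt) * grad      # ascend |kurt| toward either sign
        w /= np.linalg.norm(w)
    return w

# Demo: recover one source from an orthogonal mix of two independent sources
rng = np.random.default_rng(1)
n = 4000
S = np.vstack([rng.laplace(scale=1 / np.sqrt(2), size=n),      # super-Gaussian
               rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)])  # sub-Gaussian
th = np.pi / 6
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
X = A @ S            # orthogonal mix of unit-variance sources stays (near-)white
y = one_unit_kurtosis_ica(X) @ X
```

The recovered signal y should align (up to sign) with one of the two sources; which one depends on the initial w, which is exactly the one-unit behaviour the convergence analysis addresses.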