933 results for estimation of parameters
Abstract:
NOGUEIRA, Marcelo B.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from a Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Proceedings... Salvador: MVS 2006, 2006.
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
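The sample-select-update loop of the EDA described above can be sketched with a simplified univariate model (independent per-nurse rule probabilities rather than the paper's full Bayesian network of the joint distribution); the parameter values and fitness function below are illustrative assumptions, not the paper's:

```python
import random

def univariate_eda(num_nurses, num_rules, fitness, generations=50,
                   pop_size=100, elite_frac=0.2, seed=0):
    """EDA sketch: sample rule strings from a probability model,
    keep the fittest strings, and re-estimate the model from them."""
    rng = random.Random(seed)
    # probs[n][r] = probability of assigning scheduling rule r to nurse n
    probs = [[1.0 / num_rules] * num_rules for _ in range(num_nurses)]
    best = None
    for _ in range(generations):
        pop = [[rng.choices(range(num_rules), weights=probs[n])[0]
                for n in range(num_nurses)]
               for _ in range(pop_size)]
        pop.sort(key=fitness)                      # lower is better
        elite = pop[:max(1, int(elite_frac * pop_size))]
        if best is None or fitness(elite[0]) < fitness(best):
            best = elite[0]
        for n in range(num_nurses):                # update the model
            counts = [1e-3] * num_rules            # light smoothing
            for string in elite:
                counts[string[n]] += 1.0
            total = sum(counts)
            probs[n] = [c / total for c in counts]
    return best
```

With a toy fitness such as distance to a target rule string, the marginal probabilities concentrate on the promising rule per nurse within a few generations.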
An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering
Abstract:
This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies suggested in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
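The reward-driven reinforcement of nurse-rule pairs by the local search processor could look roughly like the following sketch; the update rate and renormalisation scheme are assumptions for illustration, not the paper's exact ant-miner rule:

```python
def reinforce(probs, solution, reward, rate=0.1):
    """Pheromone-style update: raise the probability of each
    (nurse, rule) pair used in a rewarded solution, then renormalise
    each nurse's rule distribution so it still sums to one."""
    for nurse, rule in enumerate(solution):
        probs[nurse][rule] += rate * reward
    for nurse, row in enumerate(probs):
        total = sum(row)
        probs[nurse] = [p / total for p in row]
    return probs
```

Repeatedly applying this after evaluating schedules biases future sampling toward nurse-rule pairs that earned higher rewards.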
Abstract:
The development of molecular markers for genomic studies in Mangifera indica (mango) will allow marker-assisted selection and identification of genetically diverse germplasm, greatly aiding mango breeding programs. We report here our identification of thousands of unambiguous molecular markers that can be easily assayed across genotypes of the species. With origin centered in Southeast Asia, mangos are grown throughout the tropics and subtropics as a nutritious fruit that exhibits remarkable intraspecific phenotypic diversity. With the goal of building a high density genetic map, we have undertaken discovery of sequence variation in expressed genes across a broad range of mango cultivars. A transcriptome sequence reference was built de novo from extensive sequencing and assembly of RNA from cultivar 'Tommy Atkins'. Single nucleotide polymorphisms (SNPs) in protein coding transcripts were determined from alignment of RNA reads from 24 mango cultivars of diverse origins: 'Amin Abrahimpur' (India), 'Aroemanis' (Indonesia), 'Burma' (Burma), 'CAC' (Hawaii), 'Duncan' (Florida), 'Edward' (Florida), 'Everbearing' (Florida), 'Gary' (Florida), 'Hodson' (Florida), 'Itamaraca' (Brazil), 'Jakarata' (Florida), 'Long' (Jamaica), 'M. Casturi Purple' (Borneo), 'Malindi' (Kenya), 'Mulgoba' (India), 'Neelum' (India), 'Peach' (unknown), 'Prieto' (Cuba), 'Sandersha' (India), 'Tete Nene' (Puerto Rico), 'Thai Everbearing' (Thailand), 'Toledo' (Cuba), 'Tommy Atkins' (Florida) and 'Turpentine' (West Indies). SNPs in a selected subset of protein coding transcripts are currently being converted into Fluidigm assays for genotyping of mapping populations and germplasm collections. Using an alternate approach, SNPs (144) discovered by sequencing of candidate genes in 'Kensington Pride' have already been converted and used for genotyping.
Abstract:
It has been proposed that long-term consumption of diets rich in non-digestible carbohydrates (NDCs), such as cereals, fruit and vegetables, might protect against several chronic diseases; however, it has been difficult to fully establish their impact on health in epidemiological studies. The wide range of properties of the different NDCs may dilute their impact when they are combined into one category for statistical comparisons in correlations or multivariate analyses. Several mechanisms have been suggested to explain the protective effects of NDCs, including increased stool bulk, dilution of carcinogens in the colonic lumen, reduced transit time, lowered pH, and bacterial fermentation to short chain fatty acids (SCFA) in the colon. However, it is very difficult to measure SCFA in humans in vivo with any accuracy, so epidemiological studies on the impact of SCFA are not feasible.
Most studies use dietary fibre (DF) or Non-Starch Polysaccharide (NSP) intake to estimate the levels, but not all fibres or NSP are equally fermentable. The first aim of this thesis was the development of equations to estimate the amount of fermentable carbohydrate (FC) that reaches the human colon and is fermented fully to SCFA by the colonic bacteria. Several studies were therefore examined for evidence to determine the percentage of each type of NDC that should be included in the final model, based on how much of the NDCs entered the colon intact and to what extent they were fermented to SCFA in vivo. Our model equations are:
FC-DF* or NSP$ 1: 100% soluble + 10% insoluble + 100% NDOs¥ + 5% TS**
FC-DF or NSP 2: 100% soluble + 50% insoluble + 100% NDOs + 5% TS
FC-DF or NSP 3: 100% soluble + 10% insoluble + 100% NDOs + 10% TS
FC-DF or NSP 4: 100% soluble + 50% insoluble + 100% NDOs + 10% TS
*DF: dietary fibre; **TS: total starch; $NSP: non-starch polysaccharide; ¥NDOs: non-digestible oligosaccharides
The second study of this thesis aimed to test all four predicted FC-DF and FC-NSP equations by estimating FC from dietary records against urinary biomarkers of colonic NDC fermentation. The main finding of a cross-sectional comparison of habitual diet with urinary excretion of SCFA products was a weak but significant correlation between 24 h urinary excretion of SCFA and acetate and the estimated FC-DF 4 and FC-NSP 4 when considering all study participants (n = 122). Similar correlations were observed with the data for valid participants (n = 78). FC-DF and FC-NSP also showed stronger positive correlations with 24 h urinary acetate and SCFA than DF and NSP alone. Hence, it could be hypothesised that using the developed index to estimate FC in the diet from dietary records might predict SCFA production in the colon in vivo in humans.
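The four FC model equations above are a single weighted sum evaluated with four alternative coefficient sets; a minimal sketch (the function name and the g/day inputs are illustrative, not from the thesis):

```python
# Coefficients of the four fermentable-carbohydrate (FC) models:
# assumed fraction of each dietary component (soluble fibre,
# insoluble fibre, non-digestible oligosaccharides, total starch)
# that reaches the colon and is fermented to SCFA.
FC_MODELS = {
    1: (1.00, 0.10, 1.00, 0.05),
    2: (1.00, 0.50, 1.00, 0.05),
    3: (1.00, 0.10, 1.00, 0.10),
    4: (1.00, 0.50, 1.00, 0.10),
}

def fermentable_carbohydrate(model, soluble, insoluble, ndo, total_starch):
    """Estimate FC (g/day) from dietary-record intakes (g/day)."""
    cs, ci, cn, ct = FC_MODELS[model]
    return cs * soluble + ci * insoluble + cn * ndo + ct * total_starch
```

For example, model 4 applied to 10 g soluble fibre, 10 g insoluble fibre, 5 g NDOs and 100 g total starch yields an estimated 30 g of FC.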
The next study in this thesis aimed to validate the developed FC equations using in vitro models of small intestinal digestion and human colon fermentation. The main finding of these in vitro studies was strong agreement between the amounts of SCFA produced after actual in vitro fermentation of single fibres and of different mixtures of NDCs, and the amounts predicted by the estimated FC from our developed equation FC-DF 4. This strong relationship between SCFA production in vitro, across a range of fermentations of single fibres and NDC mixtures, and that predicted by the FC equation supports the use of the equation for estimating FC from dietary records. We therefore conclude that the newly developed predictive equations are a valid and practical tool for assessing SCFA production in in vitro fermentation.
Abstract:
Aircraft altimeter and in situ measurements are used to examine relationships between altimeter backscatter and the magnitude of near-surface wind and friction velocities. Comparison of altimeter radar cross section with wind speed is made through the modified Chelton-Wentz algorithm. Improved agreement is found after correcting 10-m winds for both surface current and atmospheric stability. An altimeter friction velocity algorithm is derived based on the wind speed model and an open-ocean drag coefficient. Close agreement between altimeter- and in situ-derived friction velocities is found. For this dataset, quality of the altimeter inversion to surface friction velocity is comparable to that for adjusted winds and clearly better than the inversion to true 10-m wind speed.
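The inversion from a 10-m wind speed to a friction velocity via an open-ocean drag coefficient amounts to u* = sqrt(Cd) · U10. A minimal sketch, assuming a constant bulk drag coefficient rather than the paper's wind-speed-dependent drag law and modified Chelton-Wentz algorithm:

```python
import math

def friction_velocity(u10, cd=1.2e-3):
    """Bulk inversion u* = sqrt(Cd) * U10: converts a 10-m neutral
    wind speed (m/s) into a surface friction velocity (m/s).
    Cd = 1.2e-3 is a typical open-ocean drag coefficient for
    moderate winds; it is an illustrative assumption here."""
    return math.sqrt(cd) * u10
```

For a 10 m/s wind this gives u* of roughly 0.35 m/s, the order of magnitude against which the altimeter-derived values are compared.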
Abstract:
This work evaluates the mercury (Hg) contamination status (sediments and biota) of the Bijagós archipelago, off the coast of Guinea-Bissau. Sediments exhibited very low concentrations (<1-12 ng g⁻¹), pointing to negligible sources of anthropogenic Hg in the region. Nevertheless, Hg is well correlated with the fine fraction, aluminium, and loss on ignition, indicating the effect of grain size and organic matter content on the presence of Hg in sediments. Mercury in the bivalves Tagelus adansoni and Senilia senilis did not vary considerably among sites, ranging within narrow intervals (0.09-0.12 and 0.12-0.14 μg g⁻¹ (dry weight), respectively). Divergent substrate preferences/feeding tactics may explain slight differences between species. The value 11 ng g⁻¹ is proposed as the sediment background concentration for this West African coastal region, and concentrations within the interval 8-10 ng g⁻¹ (wet weight) may be considered a reference range for S. senilis and T. adansoni in future monitoring studies.
Abstract:
In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance, quality control schemes, etc. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged relevant by interested authorities. Such information regarding extremes serves as essential guidance to interested authorities in decision-making processes. However, in such a context, data are usually skewed in nature, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is a branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
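A standard in-sample EVT baseline for such tail estimates (not the dissertation's semiparametric method) is the Hill estimator of the tail index combined with Weissman-style extrapolation; a sketch assuming a heavy right tail and a sample with positive values:

```python
import math

def tail_estimates(sample, k, threshold, p):
    """Hill estimator of the tail index from the k largest order
    statistics, with extrapolations for a small exceedance
    probability P(X > threshold) and an extreme p-quantile."""
    x = sorted(sample)
    n = len(x)
    x_nk = x[n - k - 1]                    # (k+1)-th largest value
    gamma = sum(math.log(x[n - i] / x_nk) for i in range(1, k + 1)) / k
    # P(X > threshold) ~ (k/n) * (threshold / x_nk)^(-1/gamma)
    exceed_prob = (k / n) * (threshold / x_nk) ** (-1.0 / gamma)
    # extreme quantile: x_p ~ x_nk * (k / (n p))^gamma
    quantile = x_nk * (k / (n * p)) ** gamma
    return gamma, exceed_prob, quantile
```

On a Pareto-type sample with true tail index 0.5, the Hill estimate recovers a value close to 0.5, and the extrapolated exceedance probability is close to the exact tail probability even when the threshold lies beyond most of the data.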
Abstract:
In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR.
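Combining the exoskeleton forward kinematics with the camera-frame marker pose is, at its core, a composition of homogeneous transforms. A minimal planar sketch with hypothetical frames and angles (the real method chains full 3-D transforms from calibration and FK):

```python
import math

def matmul(a, b):
    """4x4 homogeneous-transform product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(theta, tx=0.0, ty=0.0, tz=0.0):
    """Homogeneous transform: rotation about z plus a translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, tx], [s, c, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Marker pose in the exoskeleton base frame: chain the exoskeleton
# forward kinematics (base -> camera mount) with the marker pose
# observed in the camera frame (both transforms are hypothetical).
T_base_cam = rot_z(math.pi / 4, tx=0.3)     # from exoskeleton FK
T_cam_marker = rot_z(math.pi / 4, tx=0.1)   # from the optical MOCAP
T_base_marker = matmul(T_base_cam, T_cam_marker)
angle = math.atan2(T_base_marker[1][0], T_base_marker[0][0])
```

Here the two quarter-turn rotations compose into a half-turn about z, i.e. the recovered planar angle is pi/2; joint angles would then be extracted from such composed transforms.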
Abstract:
Digital rock physics combines modern imaging with advanced numerical simulations to analyze the physical properties of rocks. In this paper we suggest a special segmentation procedure, which is applied to a carbonate rock from Switzerland. The starting point is a CT scan of a specimen of Hauptmuschelkalk. The first step applied to the raw image data is a non-local means filter. We then apply different thresholds to identify pore and solid phases. Because we are aware of a non-negligible amount of unresolved microporosity, we also define intermediate phases. Based on this segmentation, we determine porosity-dependent values for the p-wave velocity and for the permeability. The porosity measured in the laboratory is then used to compare our numerical data with experimental data, and we observe good agreement. Future work includes an analytic validation of the numerical results for the upper bound of the p-wave velocity, employing different filters for the image segmentation, and using data with higher resolution.
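The multi-threshold segmentation with an intermediate phase can be sketched as follows; the grey-value thresholds, phase labels, and the assumed microporosity fraction are illustrative, not the paper's calibrated values:

```python
def segment(voxels, pore_max, solid_min):
    """Three-phase threshold segmentation of filtered CT grey values:
    below pore_max -> pore, above solid_min -> solid, and values in
    between -> an intermediate phase carrying unresolved
    microporosity."""
    labels = []
    for v in voxels:
        if v < pore_max:
            labels.append("pore")
        elif v > solid_min:
            labels.append("solid")
        else:
            labels.append("intermediate")
    return labels

def porosity(labels, micro=0.5):
    """Total porosity: pore voxels count fully; intermediate voxels
    contribute an assumed microporosity fraction (here 50%)."""
    n = len(labels)
    return (labels.count("pore") + micro * labels.count("intermediate")) / n
```

The resulting phase fractions feed the porosity-dependent estimates of p-wave velocity and permeability.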
Abstract:
The nutritional contribution of the dietary nitrogen, carbon and total dry matter supplied by fish meal (FM), soy protein isolate (SP) and corn gluten (CG) to the growth of Pacific white shrimp Litopenaeus vannamei was assessed by means of isotopic analyses. As SP and CG are ingredients derived from plants with different photosynthetic pathways, which imprint specific carbon isotope values on plant tissues, their isotopic values were contrasting. FM is isotopically distinct from these plant meals with regard to both carbon and nitrogen. Such natural isotopic differences were used to design experimental diets having contrasting isotopic signatures. Seven isoproteic (36% crude protein), isoenergetic (4.7 kcal g−1) diets were formulated; three diets consisted of isotopic controls manufactured with only one main ingredient supplying dietary nitrogen and carbon: 100% FM (diet 100F), 100% SP (diet 100S) and 100% CG (diet 100G). Four more diets were formulated with varying mixtures of these three ingredients; one included 33% of each ingredient on a dietary nitrogen basis (diet 33FSG) and the other three included a 50:25:25 proportion of the three ingredients (diets 50FSG, 50SGF and 50GFS). At the end of the bioassay there were no significant differences in growth rate between shrimps fed on the four mixed diets and diet 100F (k=0.215–0.224). Growth rates were significantly lower (k=0.163–0.201) in shrimps grown on diets containing only plant meals. Carbon and nitrogen stable isotope values (δ13C and δ15N) were measured in experimental diets and shrimp muscle tissue, and results were incorporated into a three-source, two-isotope mixing model. The relative contributions of dietary nitrogen, carbon and total dry matter from FM, SP and CG to growth were statistically similar to the proportions established in most of the diets after correcting for the apparent digestibility coefficients of the ingredients.
Dietary nitrogen available in diet 33FSG was incorporated in muscle tissue at proportions representing 24, 35 and 41% of the respective ingredients. Diet 50GSF contributed significantly higher amounts of dietary nitrogen from CG than from FM. When the level of dietary nitrogen derived from FM was increased in diet 50FSG, nutrient contributions were more comparable to the available dietary proportions, as there was an incorporation of 44, 29 and 27% from FM, SP and CG, respectively. Nutritional contributions from SP were very consistent with the dietary proportions established in the experimental diets.
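The three-source, two-isotope mixing model described above reduces to a 3x3 linear system: two isotope mass balances plus the constraint that the source fractions sum to one. A minimal sketch solving it by Cramer's rule, with hypothetical δ13C and δ15N values (not the study's measurements):

```python
def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def mixing_model(d13c, d15n, mix):
    """Three-source, two-isotope linear mixing model: find fractional
    contributions f1, f2, f3 such that the mixture's d13C and d15N
    are mass balances of the source values and f1 + f2 + f3 = 1.
    d13c, d15n: per-source isotope values; mix: (d13C, d15N) of the
    consumer tissue."""
    a = [list(d13c), list(d15n), [1.0, 1.0, 1.0]]
    b = [mix[0], mix[1], 1.0]
    d = det3(a)
    fractions = []
    for col in range(3):  # Cramer's rule: swap column col with b
        m = [[b[row] if j == col else a[row][j] for j in range(3)]
             for row in range(3)]
        fractions.append(det3(m) / d)
    return fractions
```

For instance, if the tissue value is exactly the average of three isotopically distinct sources, the model returns contributions of one third each; in practice the fractions are further corrected for apparent digestibility, as done in the study.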