977 results for Search problems
Abstract:
Liver transplantation recipients, like other solid organ transplantation recipients, have an increased risk of dermatologic problems due to their long-term immunosuppression. They benefit from pre- and post-transplantation screening and management by a dermatologist, and dermatologic care should be integrated into the comprehensive, multidisciplinary care of liver transplantation recipients [1,2]. Cutaneous findings include aesthetic alterations, infections, precancerous lesions, and malignancies. The severity of skin alterations ranges from benign, unpleasant changes to life-threatening conditions [3-5]. In addition to skin cancer diagnosis and management, visits with a dermatologist serve to educate patients and improve their sun-protection behavior. Among all solid organ transplantations, liver transplantation requires the least amount of immunosuppression, sometimes even permitting its complete cessation [6]. As a result, patients who have undergone liver transplantation tend to have fewer dermatologic complications than other solid organ transplantation recipients [7]. However, due to the large volume of the liver, patients undergoing liver transplantation receive more donor lymphocytes than kidney, heart, or lung transplantation recipients. Because of the immunosuppression, the transplanted lymphocytes may proliferate and, in rare cases, trigger graft-versus-host disease [8,9]. This topic provides an overview of dermatologic disorders that may be seen following liver transplantation. A detailed discussion of skin cancer following solid organ transplantation and the general management of patients following liver transplantation are presented separately. (See "Development of malignancy following solid organ transplantation" and "Management of skin cancer in solid organ transplant recipients" and "Long-term management of adult liver transplant recipients".)
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding, one by one, the studies determined to be closest to the model fitted on the existing set. As each study is added to the set, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement, adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment with placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
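The forward plot idea described above can be sketched in a few lines. This is an illustrative fixed-effect version, not the authors' implementation: the inverse-variance weighting, the median-based starting subset, and the `start_size` parameter are assumptions made for the sketch.

```python
# Illustrative forward search for outlying studies in a fixed-effect
# meta-analysis. The inverse-variance weighting and the median-based
# starting subset are assumptions for this sketch, not the paper's choices.
def pooled_mean(effects, weights):
    """Inverse-variance weighted pooled effect estimate."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

def forward_search(effects, variances, start_size=3):
    weights = [1.0 / v for v in variances]
    med = sorted(effects)[len(effects) // 2]
    order = sorted(range(len(effects)), key=lambda i: abs(effects[i] - med))
    inset = order[:start_size]          # likely outlier-free starting subset
    outset = order[start_size:]
    trace = []                          # pooled estimate after each step
    while outset:
        mu = pooled_mean([effects[i] for i in inset],
                         [weights[i] for i in inset])
        trace.append(mu)
        # add the study closest to the model fitted on the current set
        nxt = min(outset, key=lambda i: abs(effects[i] - mu))
        inset.append(nxt)
        outset.remove(nxt)
    trace.append(pooled_mean(effects, weights))
    return trace    # a sharp change late in the trace flags an outlier
```

Plotting `trace` against the number of included studies gives the kind of forward plot the method monitors: the pooled estimate stays stable until an outlying study enters near the end.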
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
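The center-selection step described above can be illustrated with a small sketch. This is not the authors' code: the two objectives (function value, and negated distance to the nearest evaluated point) follow the abstract, while everything else, including function names, is an assumption.

```python
# Illustrative two-objective, non-dominated-sorting center selection.
# Objective 1: expensive function value (minimize).
# Objective 2: negated minimum distance to other evaluated points
# (minimize, which favours points far from the evaluated set).
import math

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select_centers(points, values, P):
    objs = []
    for i, p in enumerate(points):
        dmin = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        objs.append((values[i], -dmin))
    remaining = list(range(len(points)))
    centers = []
    while remaining and len(centers) < P:
        # current non-dominated front among the remaining points
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i])
                            for j in remaining if j != i)]
        centers.extend(front)
        remaining = [i for i in remaining if i not in front]
    return centers[:P]
```

In the full algorithm each selected center spawns randomly perturbed candidates, and a surrogate picks one candidate per center for parallel expensive evaluation.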
Abstract:
Swarm colonies reproduce social habits: working together in a group to reach a predefined goal is a social behaviour that occurs in nature. Linear optimization problems have been approached by different techniques based on natural models. In particular, Particle Swarm Optimization is a meta-heuristic search technique that has proven effective when dealing with complex optimization problems. This paper presents and develops a new method based on different penalty strategies to solve complex problems. It focuses on the training process of the neural networks, the constraints, and the selection of the parameters needed to ensure successful results and to avoid the most common obstacles when searching for optimal solutions.
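As a hedged illustration of the penalty idea mentioned above, here is a minimal particle swarm minimizing a statically penalized objective. The inertia and acceleration coefficients, the penalty weight, and the test problem are all assumptions for the sketch, not taken from the paper.

```python
# Minimal particle swarm with a static penalty for one constraint g(x) <= 0.
# All coefficients below are conventional illustrative choices.
import random

def pso_penalty(f, g, dim, n=20, iters=200, penalty=1e3, seed=0):
    """Minimize f(x) subject to g(x) <= 0 via the penalized objective."""
    rng = random.Random(seed)
    def cost(x):
        return f(x) + penalty * max(0.0, g(x)) ** 2
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = min(pbest, key=cost)              # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest
```

With a penalty weight this large, the penalized minimum sits just inside the constraint boundary, so infeasible candidates are effectively rejected during the search.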
Abstract:
Plant diseases represent a major economic and environmental problem in agriculture and forestry. Upon infection, a plant develops symptoms that affect different parts of the plant, causing a significant agronomic impact. As many such diseases spread over time across the whole crop, a system for early disease detection can help mitigate the losses produced by plant diseases and can further prevent their spread [1]. In recent years, several mathematical search algorithms have been proposed [2,3] that could be used as non-invasive, fast, reliable and cost-effective methods to localize an infectious focus in space by detecting changes in the profile of volatile organic compounds. Tracking scents and locating odor sources is a major challenge in robotics, on the one hand because odour plumes consist of non-uniform, intermittent odour patches dispersed by the wind, and on the other hand because of the lack of precise and reliable odour sensors. Notwithstanding, we have developed a simple robotic platform to study the robustness and effectiveness of different search algorithms [4] with respect to specific problems found in their further application in agriculture, namely errors committed in motion and sensing, and the existence of spatial constraints due to land topology or the presence of obstacles.
Abstract:
When a structural problem is posed, the intention is usually to obtain the best solution, understood as the solution that, while fulfilling the structural, usage and other requirements, has the lowest physical cost. In a first approximation, the physical cost can be represented by the self-weight of the structure, which allows the search for the best solution to be posed as the search for the lightest one. From a practical point of view, obtaining good solutions—i.e. solutions whose cost is only slightly higher than that of the best solution—is as important a task as finding absolute optima, something that is in general hardly tractable. In order to have a measure of efficiency that makes comparison between solutions possible, the following definition of structural efficiency is proposed: the ratio between the useful load to be supported and the total load to be accounted for (the sum of the useful load and the self-weight). The structural form can be considered to comprise four concepts which, together with the material, define a structure: size, schema, proportion (or slenderness), and thickness. Galileo (1638) postulated the existence of an insurmountable size for every structural problem—the size at which, for a given schema and proportion, the self-weight alone exhausts the structure. Such a size, or structural scope, is different for each material used; the only information about the material needed to determine it is the ratio between its allowable stress and its specific weight, a characteristic length that we call the scope of the material. For structures whose size is very small relative to their structural scope, the above definition of efficiency is useless. In this case—structures of "null size", in which the self-weight is negligible compared with the useful load—we propose as a measure of cost the dimensionless magnitude we call Michell's number, derived from the "quantity" introduced by A. G. M. Michell in his seminal article of 1904, developed from a lemma of J. C. Maxwell of 1870. At the end of the last century, R. Aroca combined the theories of Galileo and those of Maxwell and Michell, proposing an easily applied design rule (the GA rule) that allows the scope and the efficiency of a structural form to be estimated. The present work studies the efficiency of truss-like structures in bending problems, taking the influence of size into account. On the one hand, for structures of null size, near-optimal schemas are explored by means of several minimization methods, with the aim of obtaining forms whose cost (measured by their Michell's number) is very close to that of the absolute optimum while greatly reducing their complexity. On the other hand, a method is presented for determining the structural scope of truss-like structures (taking into account the local bending effects in their members); its results are compared with those obtained by applying the GA rule, showing the conditions under which the rule is applicable. Finally, lines of future research are identified: the measurement of complexity, the accounting of the cost of foundations, and the extension of the minimization methods to the case in which self-weight is taken into account.
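The two measures defined in the abstract above, structural efficiency and the scope of the material, reduce to one-line formulas. The sketch below states them in code; the numeric values are illustrative assumptions, not taken from the thesis.

```python
# The two measures defined in the abstract (illustrative values only).
def efficiency(useful_load, self_weight):
    """Structural efficiency: useful load over total load (useful + self-weight)."""
    return useful_load / (useful_load + self_weight)

def material_scope(allowable_stress, specific_weight):
    """Scope of the material: a characteristic length, stress over specific weight."""
    return allowable_stress / specific_weight

# A steel-like material, assumed: 200 MPa allowable stress, 78.5 kN/m^3
# specific weight, giving a scope of roughly 2.5 km.
scope_m = material_scope(200e6, 78.5e3)
```

Note how efficiency tends to zero as a structure approaches its insurmountable size (self-weight dominating), which is exactly why the Michell-number measure is needed for structures of null size.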
Abstract:
Activation of genes by heavy metals, notably zinc, cadmium and copper, depends on MTF-1, a unique zinc finger transcription factor conserved from insects to human. Knockout of MTF-1 in the mouse results in embryonic lethality due to liver decay, while knockout of its best characterized target genes, the stress-inducible metallothionein genes I and II, is viable, suggesting additional target genes of MTF-1. Here we report on a multi-pronged search for potential target genes of MTF-1, including microarray screening, SABRE selective amplification, a computer search for MREs (DNA-binding sites of MTF-1) and transfection of reporter genes driven by candidate gene promoters. Some new candidate target genes emerged, including those encoding α-fetoprotein, the liver-enriched transcription factor C/EBPα and tear lipocalin/von Ebner's gland protein, all of which have a role in toxicity/the cell stress response. In contrast, expression of other cell stress-associated genes, such as those for superoxide dismutases, thioredoxin and heat shock proteins, does not appear to be affected by loss of MTF-1. Our experiments have also exposed some problems with target gene searches. First, finding the optimal time window for detecting MTF-1 target genes in a lethal phenotype of rapid liver decay proved problematic: 12.5-day-old mouse embryos (stage E12.5) yielded hardly any differentially expressed genes, whereas at stage E13.0 reduced expression of secretory liver proteins probably reflected the onset of liver decay, i.e. a secondary effect. Likewise, up-regulation of some proliferation-associated genes may also just reflect responses to the concomitant loss of hepatocytes. Another sobering finding concerns the γ-glutamylcysteine synthetase heavy subunit (γ-GCShc), which controls synthesis of the antioxidant glutathione and which was previously suggested to be a target gene contributing to the lethal phenotype in MTF-1 knockout mice.
γ-GCShc mRNA is reduced at the onset of liver decay but MTF-1 null mutant embryos manage to maintain a very high glutathione level until shortly before that stage, perhaps in an attempt to compensate for low expression of metallothioneins, which also have a role as antioxidants.
Abstract:
Included are 157 references to reports, journals, and other literature related to the problems and techniques of tritium handling. The subject scope embraces analytical and monitoring procedures and instruments, physiological effects, and safety measures and standards.
Abstract:
Mode of access: Internet.
Abstract:
Includes bibliographical references and index.
Abstract:
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in the solution of inverse plant models. In order to obtain a complete description of the inverse model, a more general multi-component distribution must be modeled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, which are modeled here by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
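A much-simplified, hedged stand-in for the idea above: candidate controls are drawn from a two-component Gaussian mixture (playing the role of a mixture density network's predicted conditional density) and the candidate whose predicted plant output is closest to the setpoint is kept. The plant, setpoint and mixture parameters are all invented for illustration.

```python
# Sampling candidate controls from a two-component Gaussian mixture that
# stands in for an MDN's output density. Illustrative values throughout.
import random

def sample_mixture(rng, weights, means, sigmas):
    """Draw one sample from a one-dimensional Gaussian mixture."""
    k = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.gauss(means[k], sigmas[k])

def select_control(plant, setpoint, weights, means, sigmas, n=500, seed=1):
    """Keep the sampled control whose plant output is closest to the setpoint."""
    rng = random.Random(seed)
    candidates = [sample_mixture(rng, weights, means, sigmas)
                  for _ in range(n)]
    return min(candidates, key=lambda u: abs(plant(u) - setpoint))

# A multi-valued inverse: plant(u) = u**2 maps both +u and -u to the same
# output, so a single Gaussian over u would average the two branches to
# an invalid control near zero; the mixture keeps both branches.
u = select_control(lambda u: u * u, setpoint=4.0,
                   weights=[0.5, 0.5], means=[-2.0, 2.0], sigmas=[0.3, 0.3])
```

The example shows why a single-Gaussian inverse model fails on multi-valued mappings: only a multi-component density keeps both valid control branches available to the sampler.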
Abstract:
In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, an approach that leads to zero-one non-linear programming problems, for which specifically designed solution procedures, supported by a general branch-and-bound development, are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the search sequencing of optimal solutions, is presented as an alternative with reduced storage requirements.
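A toy sketch of the branch-and-bound idea for a zero-one capital-rationing problem of the kind described above: choose projects maximizing total NPV subject to a budget. The linear objective and the simple additive bound are simplifying assumptions for the sketch; the paper treats non-linear cases.

```python
# Toy zero-one branch and bound: maximize total NPV under a budget.
# The bound optimistically adds every remaining positive NPV (an assumption
# for this sketch, not the paper's bounding procedure).
def branch_and_bound(npv, cost, budget):
    best = [0.0, []]                    # incumbent value and chosen projects
    def bound(i, value):
        return value + sum(v for v in npv[i:] if v > 0)
    def explore(i, value, spent, chosen):
        if value > best[0]:             # update the incumbent
            best[0], best[1] = value, chosen[:]
        if i == len(npv) or bound(i, value) <= best[0]:
            return                      # prune: bound cannot beat incumbent
        if spent + cost[i] <= budget:   # branch 1: take project i
            explore(i + 1, value + npv[i], spent + cost[i], chosen + [i])
        explore(i + 1, value, spent, chosen)   # branch 0: skip project i
    explore(0, 0.0, 0.0, [])
    return best[0], best[1]
```

The pruning test is where the "relevant mathematical properties" of a given problem class enter: a tighter bound prunes more of the zero-one tree.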
Abstract:
The goal of semantic search is to improve on traditional search methods by exploiting semantic metadata. In this paper, we argue that supporting iterative and exploratory search modes is important to the usability of all search systems. We also identify the types of semantic queries users need to make, the issues concerning the search environment, and the problems that are intrinsic to semantic search in particular. We then review the four modes of user interaction in existing semantic search systems, namely keyword-based, form-based, view-based and natural language-based systems. Future development should focus on multimodal search systems, which exploit the advantages of more than one mode of interaction, and on developing search systems that can search heterogeneous semantic metadata on the open semantic Web.
Abstract:
Background Qualitative research makes an important contribution to our understanding of health and healthcare. However, qualitative evidence can be difficult to search for and identify, and the effectiveness of different types of search strategies is unknown. Methods Three search strategies for qualitative research in the example area of support for breast-feeding were evaluated using six electronic bibliographic databases. The strategies were based on thesaurus terms, free-text terms and broad-based terms, and were combined with recognised search terms for support for breast-feeding previously used in a Cochrane review. For each strategy, we evaluated the recall (the number of potentially relevant records retrieved) and the precision (the proportion of retrieved records that were actually relevant). Results A total yield of 7420 potentially relevant records was retrieved by the three strategies combined. Of these, 262 were judged relevant. Using one strategy alone would miss relevant records. The broad-based strategy had the highest recall and the thesaurus strategy the highest precision. Precision was generally poor: 96% of records initially identified as potentially relevant were deemed irrelevant. Searching for qualitative research thus involves trade-offs between recall and precision. Conclusions These findings confirm that strategies that attempt to maximise the number of potentially relevant records found are likely to result in a large number of false positives. The findings also suggest that a range of search terms is required to optimise searching for qualitative evidence. This underlines the problems of current methods for indexing qualitative research in bibliographic databases and indicates where improvements need to be made.
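The precision figure reported above can be recomputed directly from the stated counts (7420 records retrieved by the three strategies combined, 262 judged relevant):

```python
# Recomputing the abstract's reported precision from its stated counts.
retrieved = 7420
relevant_found = 262
precision = relevant_found / retrieved      # share of retrieved records that were relevant
irrelevant_share = 1 - precision            # share deemed irrelevant
```

The precision works out to about 3.5%, i.e. roughly 96% of retrieved records were irrelevant, matching the figure given in the Results.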