Abstract:
The objective of this study was to compare the BLUP selection method with different selection strategies in F2:4 progenies and to assess the efficiency of this method for the early choice of the best common bean (Phaseolus vulgaris) lines. Fifty-one F2:4 progenies were produced from the cross CVIII8511 x RP-26. A randomized block design was used with 20 replications and one-plant field plots. Data on plant architecture and grain yield were obtained, and the sum of the standardized variables was then estimated for simultaneous selection of both traits. Analysis was carried out by mixed models (BLUP) and by the least squares method in order to compare different selection strategies, such as mass selection, stratified mass selection, and between- and within-progeny selection. The progenies selected by BLUP were assessed in advanced generations, always selecting the greatest and smallest sums of the standardized variables. Analyses by the least squares method and by the BLUP procedure ranked the progenies in the same way. The coincidence between the individuals identified by BLUP and by between- and within-progeny selection was high, and it was greatest when BLUP was compared with mass selection. Although BLUP is the best estimator of genotypic value, its efficiency in the response to long-term selection is not different from that of the other methods, because it is also unable to predict the future effect of the progeny x environment interaction. It was inferred that selection success will always depend on the most accurate possible progeny assessment and on using alternatives to reduce the effect of the progeny x environment interaction.
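As a concrete illustration of the selection index described above, the following sketch (with hypothetical data and illustrative column names, not the study's dataset) computes the sum of the standardized variables for plant architecture and grain yield and ranks progenies by it; the mixed-model (BLUP) fit itself is not reproduced here.

```python
import numpy as np
import pandas as pd

# Hypothetical phenotype table: one row per plant, with progeny ID,
# a plant-architecture score and grain yield (names are illustrative).
data = pd.DataFrame({
    "progeny": np.repeat([f"P{i:02d}" for i in range(1, 52)], 20),
    "architecture": np.random.default_rng(1).normal(5.0, 1.0, 51 * 20),
    "yield_g": np.random.default_rng(2).normal(30.0, 6.0, 51 * 20),
})

# Standardize each trait (zero mean, unit variance) so the two traits can be
# summed on a common scale, then add them to form the selection index.
for trait in ("architecture", "yield_g"):
    data[trait + "_z"] = (data[trait] - data[trait].mean()) / data[trait].std()
data["sel_index"] = data["architecture_z"] + data["yield_g_z"]

# Rank progenies by the mean of the summed standardized variables and keep,
# e.g., the top 10% -- analogous to selecting the greatest sums.
progeny_means = data.groupby("progeny")["sel_index"].mean().sort_values(ascending=False)
selected = progeny_means.head(int(np.ceil(0.10 * len(progeny_means))))
print(selected)
```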
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc), with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AU, would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne, extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m); a detector with a large field of view located 40 m away from the telescope, made of 8 small movable CCDs arranged around a fixed central CCD; and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation-flying option as the reference mission and of a deployable-boom option as an alternative. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improving the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite astrometric precision, NEAT holds the promise of providing the first thorough census of Earth-mass planets around stars in the immediate vicinity of our Sun.
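For orientation on the quoted 0.05 μas requirement, the standard back-of-the-envelope estimate of the astrometric signature of a planet (not NEAT-specific code; a sketch of a textbook relation) is:

```python
def astrometric_signature_uas(m_planet_earth, a_au, m_star_sun, d_pc):
    """Approximate astrometric wobble amplitude in microarcseconds.

    Standard estimate: alpha ~ 3 uas * (Mp/M_Earth) * (a/AU) / ((M*/M_Sun) * (d/pc)),
    i.e. the Earth-Sun system seen from 1 pc produces a ~3 uas signature.
    """
    return 3.0 * m_planet_earth * a_au / (m_star_sun * d_pc)

# An Earth analogue (1 M_Earth at 1 AU) around a Sun-like star at 10 pc:
print(astrometric_signature_uas(1.0, 1.0, 1.0, 10.0))  # ~0.3 uas, about 6x the 0.05 uas floor
```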
Abstract:
Ruthenium nitrosyl and nitrite complexes are particularly interesting because they can not only scavenge but also release nitric oxide in a controlled manner, regulating the NO level in vivo. The judicious choice of ligands attached to the [RuNO] core has been shown to be a suitable strategy to modulate NO reactivity in these complexes. In order to understand the influence of different equatorial ligands on the electronic structure of the Ru-NO chemical bond, and thus on the reactivity of the coordinated NO, we propose an investigation of the nature of the Ru-NO chemical bond by means of energy decomposition analysis (EDA), considering tetraamines and tetraazamacrocycles as equatorial ligands, before and after one-electron reduction of the {RuNO}^6 moiety. This investigation provides deep insight into the Ru-NO bonding situation, which is fundamental for designing new ruthenium nitrosyl complexes with potential biological applications.
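For reference, EDA partitions the instantaneous interaction energy between the chosen fragments (here, NO and the remaining ruthenium fragment) into standard terms; the specific EDA variant and fragmentation scheme used by the authors are not restated here.

```latex
% Standard Ziegler-Rauk-type partition of the fragment interaction energy:
\Delta E_{\mathrm{int}} = \Delta E_{\mathrm{elstat}} + \Delta E_{\mathrm{Pauli}} + \Delta E_{\mathrm{orb}}
% quasiclassical electrostatic attraction, Pauli (exchange) repulsion,
% and orbital (covalent / charge-transfer) interaction, respectively.
```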
Abstract:
The predictions of Drell-Yan production of low-mass lepton pairs at high rapidity at the LHC are known to depend sensitively on the choice of factorization and renormalization scales. We show how this sensitivity can be greatly reduced by fixing the factorization scale of the LO contribution based on the known NLO matrix element, so that observations of this process at the LHC can provide direct measurements of parton distribution functions in the low-x domain, x ≲ 10^-4.
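The quoted low-x reach follows from leading-order Drell-Yan kinematics: for a lepton pair of invariant mass M produced at rapidity y, the momentum fractions of the two incoming partons are

```latex
x_{1,2} \simeq \frac{M}{\sqrt{s}}\, e^{\pm y}
```

so a low-mass pair (M of a few GeV) at large forward rapidity, with √s in the TeV range, drives the smaller of the two fractions down to 10^-4 and below.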
Abstract:
Colorectal cancer (CRC) is the most common tumour type in both sexes combined in Western countries. Although screening programmes, including the implementation of faecal occult blood testing and colonoscopy, may be able to reduce mortality by removing precursor lesions and by making the diagnosis at an earlier stage, the burden of disease and mortality is still high. Improvements in diagnostic and treatment options have increased staging accuracy and functional outcome for early stages, as well as survival. Although high-quality surgery is still the mainstay of curative treatment, the management of CRC must be a multi-modal approach performed by an experienced multi-disciplinary expert team. Optimal choice of the individual treatment modality according to disease localization and extent, tumour biology and patient factors can maintain quality of life and enables long-term survival, and even cure in selected patients, through a combination of chemotherapy and surgery. Treatment decisions must be based on the available evidence, which has been the basis for this consensus conference-based guideline delivering a clear proposal for diagnostic and treatment measures in each stage of rectal and colon cancer and in the individual clinical situations. This ESMO guideline is recommended to be used as the basis for treatment and management decisions.
Abstract:
Copper complexes with fluorinated beta-diketones were synthesized and characterized in terms of lipophilicity and peroxide-assisted oxidation of dihydrorhodamine as an indicator of redox activity. The biological activity of the complexes was tested against promastigotes of Leishmania amazonensis. Inhibition of trypanosomatid-specific trypanothione reductase was also tested. It was found that the highly lipophilic and redox-active bis(trifluoroacetylacetonate) derivative had increased toxicity towards promastigotes. These results indicate that it is possible to modulate the activity of metallodrugs based on redox-active metals through the appropriate choice of lipophilic chelators in order to design new antileishmanials. Further work will be necessary to improve selectivity of these compounds against the parasite.
Abstract:
Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability under the model characterizing the sequence family of interest is compared to that under an alternative probability model. A null model can be used as the alternative model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study that evaluates the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results: In all tests, the target null model presented the lowest number of false positives when random sequences were used as a test set. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
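A minimal sketch of the log-odds scoring that this comparison rests on, using a hypothetical sequence and hand-set emission probabilities rather than the HMMER/SAM/INFERNAL implementations, and contrasting the uniform and target-composition null models discussed above:

```python
import math
from collections import Counter

def log_odds(seq, model_probs, null_probs):
    """Sum over positions of log2 P(residue | model) / P(residue | null) for a
    position-independent model; positive scores favour the family model."""
    return sum(math.log2(model_probs[c] / null_probs[c]) for c in seq)

seq = "GCGCGGCGATGCGCGC"  # hypothetical GC-rich candidate sequence

# Hypothetical position-independent "family" emission probabilities.
model = {"A": 0.10, "C": 0.40, "G": 0.40, "T": 0.10}

# Null model 1: uniform residue distribution.
uniform = {c: 0.25 for c in "ACGT"}

# Null model 2: target-sequence composition, estimated from the sequence itself.
counts = Counter(seq)
target = {c: counts[c] / len(seq) for c in "ACGT"}

print("score vs uniform null:", log_odds(seq, model, uniform))
print("score vs target  null:", log_odds(seq, model, target))
# The uniform null inflates the score of GC-rich sequences (more false positives
# under extreme compositional bias); the target null is more conservative.
```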
Abstract:
CONTEXT: About 9% of the Brazilian population has gallstones, and the incidence increases significantly with aging. Choledocholithiasis is found in around 15% of these patients, and a third to a half of these cases are asymptomatic. Once lithiasis in the common bile duct is characterized by intraoperative cholangiography, laparoscopic surgical exploration can be done through the transcystic route or directly through choledochotomy. OBJECTIVE: To evaluate the results and outcomes of the laparoscopic treatment of common bile duct lithiasis. METHODS: Seventy consecutive patients who prospectively underwent treatment of lithiasis in the common bile duct were evaluated, and the exploration routes were compared according to the following parameters: criteria for their indication, success in duct clearance, and surgical complications. It was verified that about half of the patients with choledocholithiasis did not show any predictive factors (clinical history of jaundice and/or acute pancreatitis, compatible sonographic findings and the pertinent laboratory tests). Laparoscopic exploration through the transcystic route is favored when there are no criteria for primary choledochotomy, namely: lithiasis in the proximal bile duct, and large (over 8 mm) or numerous calculi (multiple calculosis). RESULTS: The transcystic route was employed in about 50% of the cases and choledochotomy in about 30%. A high success rate (around 80%) was achieved in the clearance of common bile duct stones by laparoscopic exploration. The transcystic route, performed without fluoroscopy or choledochoscopy, attained a low success rate (around 45%), 10% of which corresponded to transpapillary pushing of calculi smaller than 3 mm. Exploration through choledochotomy, whether primary or secondary (the latter performed after failure of the transcystic route), showed a high success rate (around 95%). When choledochotomy was indicated primarily, choledochoscopy through the choledochotomy was needed to assist removal of the calculi in 55% of cases; when choledochotomy was performed secondarily, in situations where the common bile duct diameter was larger than 6 mm, choledochoscopy was used for the same purpose in about 20% of cases. There was no mortality in this series. CONCLUSION: Laparoscopic exploration of the common bile duct was associated with a low rate of morbidity. Therefore, the use of laparoscopy for the treatment of lithiasis in the common bile duct depends on the criteria for choosing the best access, making it a safe procedure with very good results.
Abstract:
The theoretical framework that underpins this study is based on the Prospect Theory formulated by Kahneman and Tversky and on Thaler's Mental Accounting Theory. The research aims to evaluate consumers' behavior when different patterns of discount are offered (in percentage and in absolute value, and for larger and smaller discounts). Two experiments were conducted to explore these patterns of behavior, and the results supported the view that the framing effect was a common occurrence. The patterns of choice of the individuals in the sample changed with the way discounts were offered. This can be explained by the different ways of presenting discounts influencing purchase intentions, recommendations and quality perception.
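A tiny numerical illustration of the framing manipulation studied here (hypothetical prices and discount, not the authors' experimental stimuli): the same absolute discount reads very differently when expressed as a percentage of a cheap versus an expensive product.

```python
def framings(price, discount):
    """Return the two presentations of the same discount: absolute value and percentage."""
    return {"absolute (R$)": discount, "percentage (%)": 100 * discount / price}

# The same R$ 15 discount framed on a R$ 50 item vs. a R$ 1,500 item.
print(framings(50, 15))    # {'absolute (R$)': 15, 'percentage (%)': 30.0}
print(framings(1500, 15))  # {'absolute (R$)': 15, 'percentage (%)': 1.0}
# Prospect theory predicts the 30% framing is evaluated more favourably than
# the 1% framing even though the monetary saving is identical.
```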
Abstract:
In this communication we report results from the application of the newly proposed creeping tide theory (Ferraz-Mello, Cel. Mech. Dyn. Astron., submitted; arXiv astro-ph 1204.3957) to the study of the rotation of the Moon. The choice of the Moon for the first application of this new theory is motivated by the fact that the Moon is one of the best observed celestial bodies, and the comparison of the theoretical predictions with observations may validate the theory or point out the need for further improvements. In particular, the tidal perturbations of the rotation of the Moon - the physical libration of the Moon - have been detected in Lunar Laser Ranging measurements (Williams et al. JGR 106, 27933, 2001). The major difficulty in this application comes from the fact that tidal torques in a planet-satellite system are very sensitive to the distance between the two bodies, which is strongly affected by solar perturbations. In the case of the Moon, the main solar perturbations - the Evection and the Variation - are more important than most of the Keplerian oscillations, being smaller only than the first Keplerian harmonic (equation of the centre). Besides, two of the three components of the Moon's libration in longitude whose tidal contributions were determined by LLR are related to these perturbations. The results may allow us to determine the main parameter of a possible creeping tide of the Moon. The preliminary results point to a relaxation factor (gamma) 2 to 4 times smaller than the one predicted from the often-cited values of the Moon's quality factor Q (between 30 and 40), and thus to larger Q values.
Abstract:
Brazil is expected to have 19.6 million patients with diabetes by the year 2030. A key concept in the treatment of type 2 diabetes mellitus (T2DM) is establishing individualized glycemic goals based on each patient's clinical characteristics, which impact the choice of antihyperglycemic therapy. Targets for glycemic control, including fasting blood glucose, postprandial blood glucose, and glycated hemoglobin (A1C), are often not reached with antihyperglycemic therapy alone, and insulin therapy is often required. Basal insulin is considered an initial strategy; however, premixed insulins are convenient and equally or more effective, especially for patients who require both basal and prandial control but prefer a simpler strategy involving fewer daily injections than a basal-bolus regimen. Most physicians are reluctant to transition patients to insulin treatment because of inappropriate assumptions and insufficient information. We conducted a nonsystematic review in PubMed and identified the most relevant and recently published articles that compared the use of premixed insulin versus basal insulin analogues used alone or in combination with rapid-acting insulin analogues before meals in patients with T2DM. These studies suggest that premixed insulin analogues are equally or more effective in reducing A1C than basal insulin analogues alone, in spite of a small increase in the risk of nonsevere hypoglycemic events and a clinically nonsignificant weight gain. Premixed insulin analogues can be used in insulin-naïve patients, in patients already on basal insulin therapy, and in those using basal-bolus therapy who are noncompliant with blood glucose self-monitoring and titration of multiple insulin doses. We additionally provide practical aspects related to titration for the specific premixed insulin analogue formulations commercially available in Brazil.
Abstract:
An important requirement for computer systems developed for the agricultural sector is handling the heterogeneity of data generated in different processes. Most problems related to this heterogeneity arise from the lack of a standard shared by the different computing solutions proposed. An efficient solution is to create a single standard for data exchange. The study of the actual process involved in cotton production was based on research developed by the Brazilian Agricultural Research Corporation (EMBRAPA) that describes all phases, compiled from several theoretical and practical studies related to the cotton crop. The proposition of a standard starts with the identification of the most important classes of data involved in the process, and includes an ontology, that is, the systematization of concepts related to the production of cotton fiber, resulting in a set of classes, relations, functions and instances. The results are used as a reference for the development of computational tools, transforming implicit knowledge into applications that support the knowledge described. This research is based on data from the Midwest of Brazil. The cotton process was chosen as a case study because Brazil is one of the major players in this segment and several improvements are still required for system integration.
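A minimal sketch of how classes, relations and instances of such an ontology could back a data-exchange tool; all class and attribute names below are hypothetical and are not taken from the EMBRAPA material.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """A phase of the cotton production process (e.g. planting, harvest, ginning)."""
    name: str
    inputs: list[str] = field(default_factory=list)   # e.g. seed variety, fertilizer
    outputs: list[str] = field(default_factory=list)  # e.g. seed cotton, fiber lot

@dataclass
class FiberLot:
    """An instance produced by a phase, carrying the attributes systems must exchange."""
    lot_id: str
    micronaire: float
    staple_length_mm: float
    produced_by: Phase

ginning = Phase(name="ginning", inputs=["seed cotton"], outputs=["fiber lot", "cottonseed"])
lot = FiberLot(lot_id="MT-2023-0001", micronaire=4.2, staple_length_mm=29.5, produced_by=ginning)
print(lot)
```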
Abstract:
The subject of this doctoral dissertation is the definition of a new methodology for the morphological and morphometric study of fossilized human teeth, and it thereby aims to contribute to the reconstruction of human evolutionary history, with the intention of extending the method to the different species of fossil hominids. Standardized investigative methodologies are lacking both for the orientation of the teeth under study and for the analyses that can be carried out on these teeth once they are oriented. The opportunity to standardize a primary analysis methodology is furnished by the study of certain early Neanderthal and pre-Neanderthal molars recovered in two caves in southern Italy [Grotta Taddeo (Taddeo Cave) and Grotta del Poggio (Poggio Cave), near Marina di Camerata, Campania]. To these are added other molars of Neanderthals and of Upper Paleolithic modern humans, scanned in the paleoanthropology laboratory of the University of Arkansas (Fayetteville, Arkansas, USA), in order to enlarge the paleoanthropological sample and thereby make the final results of the analyses more significant. The new analysis methodology comprises the following steps:
1. Standardization of an orientation system for first molars (upper and lower), starting from a scan of a sample of 30 molars belonging to modern humans (15 lower M1 and 15 upper M1), the definition of landmarks, the comparison of various systems, and the choice of an orientation system for each of the two dental typologies.
2. The definition of an analysis procedure that considers only the first 4 millimeters of the dental crown starting from the collar: 5 sections parallel to the plane according to which the tooth has been oriented are taken, spaced 1 millimeter apart (sketched below). The intention is to devise a method that allows fossil species to be differentiated even when the teeth are worn.
3. Results and conclusions. The new approach to the study of teeth provides a considerable quantity of information that can be better evaluated by increasing the fossil sample. It has been shown to be a valid tool for evolutionary classification that allowed us to differentiate the Neanderthal sample from that of modern humans. In particular, the molars of Grotta Taddeo, whose species attribution could not previously be determined with certainty, are classified by the present research as Neanderthal.
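A simplified numerical sketch of the sectioning step referenced in item 2, using a synthetic point cloud in place of the scanned molars: once a tooth is oriented with the cervical (collar) plane at z = 0, five parallel sections spaced 1 mm apart are extracted over the first 4 mm of the crown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical oriented crown surface points (x, y, z in mm), z = 0 at the collar plane.
points = rng.uniform([-6, -6, 0], [6, 6, 8], size=(20000, 3))

def crown_sections(points, n_sections=5, spacing=1.0, tolerance=0.05):
    """Return the points lying near each of the section planes z = 0, 1, ..., 4 mm."""
    sections = {}
    for k in range(n_sections):
        z = k * spacing
        mask = np.abs(points[:, 2] - z) < tolerance
        sections[z] = points[mask, :2]  # keep the (x, y) outline at this height
    return sections

for z, outline in crown_sections(points).items():
    # Morphometric descriptors (outline area, diameters, etc.) could be computed here.
    print(f"section at z = {z:.0f} mm: {len(outline)} points")
```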
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset's performance, associated risks and costs within its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and the choice of the technique adopted to rehabilitate is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma - 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment to replace an asset and operational expenditure to maintain it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being the comparison between maintenance and replacement expenditures. The cost of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common value for lifetime should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include to estimate maintenance costs for the existing assets, the more precise the estimation of the expected service life will be. The ability to include social costs makes it possible to compute the asset life not only on the basis of its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making decisions about replacements.
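A stripped-down sketch (with made-up cost figures) of the comparison at the heart of the model: the candidate replacement year is the first year in which the forecast annual operating and maintenance cost of the existing pipe exceeds the annuity of the investment in a new equivalent pipe.

```python
def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of a capital investment (standard annuity formula)."""
    return investment * rate / (1.0 - (1.0 + rate) ** -lifetime_years)

def optimal_replacement_year(maintenance_by_year, investment, rate, lifetime_years):
    """First year in which forecast O&M cost exceeds the annuity of a new pipe."""
    a = annuity(investment, rate, lifetime_years)
    for year, cost in enumerate(maintenance_by_year, start=1):
        if cost > a:
            return year, a
    return None, a

# Hypothetical figures: O&M costs rising 8% per year from 1,500/yr; a new pipe
# costs 50,000, financed at 4% over an expected 80-year service life.
maintenance = [1500 * 1.08 ** t for t in range(60)]
year, a = optimal_replacement_year(maintenance, 50000, 0.04, 80)
print(f"annuity of new pipe ≈ {a:.0f}/yr; candidate replacement year: {year}")
```

In a fuller version of the model, the maintenance forecast would itself be a cost function of asset type, deterioration level and risk of failure, as described above, rather than a simple growth rate.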
Abstract:
The increasing aversion of society to technological risks requires the development of inherently safer and environmentally friendlier processes, besides assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Although the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are needed to identify the marginal improvements of alternative design options, to allow trade-offs among contradictory aspects and to prevent "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools are specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, for chemical and industrial processes in which substances dangerous for humans and the environment are used or stored. The tools are mainly intended for application in the "conceptual" and "basic design" stages, when the project is still open to changes (owing to the large number of degrees of freedom), which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system is based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (including complex ones), but are easy and quick to apply in practice. Their full evaluation is possible even starting from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in the different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based both on the calculation of the expected consequences of potential accidents and on the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previously proposed methods for quantitative inherent safety assessment (use of arbitrary indices, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. For the assessment of layout plans, ad hoc tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by their application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, of the plant, and of the layout) and different types of processes and plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for the inherent safety assessment of materials.
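An illustrative sketch of the kind of normalization and weighted aggregation that turning heterogeneous KPIs into a comparable sustainability profile requires; the indicators, reference values and weights below are hypothetical, not the authors' KPI set or reference criteria.

```python
# Hypothetical raw KPI values for two design options (lower is better for all three).
options = {
    "option_A": {"economic_cost": 1.2e6, "potential_fatalities": 3e-4, "co2_t_per_y": 900},
    "option_B": {"economic_cost": 1.5e6, "potential_fatalities": 8e-5, "co2_t_per_y": 1100},
}
# Reference values standing in for the site-specific impact burden / policy (assumed).
reference = {"economic_cost": 1.0e6, "potential_fatalities": 1e-4, "co2_t_per_y": 1000}
weights = {"economic_cost": 0.4, "potential_fatalities": 0.4, "co2_t_per_y": 0.2}

def aggregate(kpis):
    """Normalize each KPI against its reference value and combine with policy weights."""
    return sum(weights[k] * kpis[k] / reference[k] for k in kpis)

for name, kpis in options.items():
    print(name, round(aggregate(kpis), 2))
# The option with the lower aggregated score has the smaller overall impact under this
# particular weighting; the individual ratios make the trade-offs between economic,
# societal and environmental impacts explicit.
```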