973 results for Tabu search algorithms
Abstract:
The SOS screen, as originally described by Perkins et al. (1999) [7], was set up with the aim of identifying Arabidopsis functions that might potentially be involved in DNA metabolism. Such functions, when expressed in bacteria, are likely to disturb replication and thus trigger the SOS response. Consistently, expression of AtRAD51 and AtDMC1 induced the SOS response in bacteria, even affecting E. coli viability. One hundred SOS-inducing cDNAs were isolated from a cDNA library constructed from an Arabidopsis cell suspension that was found to highly express meiotic genes. A large proportion of these SOS(+) candidates are clearly related to DNA metabolism, others could be involved in RNA metabolism, while the remaining cDNAs encode either totally unknown proteins or proteins that were considered irrelevant. Seven SOS(+) candidate genes are induced following gamma irradiation. The in planta function of several of the SOS-inducing clones was investigated using T-DNA insertional mutants or RNA interference. Only one SOS(+) candidate, among those examined, exhibited a defined phenotype: plants silenced for DUT1 were sensitive to 5-fluoro-uracil (5FU), as is the case for the leaky dut-1 mutant of E. coli, which is affected in dUTPase activity. dUTPase is essential to prevent uracil incorporation in the course of DNA replication.
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable down to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
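As a rough illustration of the signal-to-noise measurement described above, the sketch below computes SNR from a hypothetical set of Hounsfield-unit values sampled in an aortic region of interest; the numbers and the NumPy-based workflow are illustrative assumptions, not the study's actual measurement pipeline.

```python
import numpy as np

# Hypothetical region-of-interest (ROI) pixel values in Hounsfield units,
# e.g. sampled from the aortic lumen on a reconstructed image (assumed data).
roi_hu = np.array([310, 295, 305, 318, 289, 301, 312, 298], dtype=float)

mean_attenuation = roi_hu.mean()   # signal: mean attenuation (HU)
noise = roi_hu.std(ddof=1)         # noise: standard deviation (HU)
snr = mean_attenuation / noise     # signal-to-noise ratio

print(f"Mean attenuation: {mean_attenuation:.1f} HU")
print(f"Noise (SD):       {noise:.1f} HU")
print(f"SNR:              {snr:.1f}")
```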
Abstract:
We estimate the attainable limits on the coupling of a nonstandard Higgs boson to two photons, taking into account the data collected by the Fermilab collaborations on diphoton events. We base our analysis on a general set of dimension-6 effective operators that give rise to anomalous couplings in the bosonic sector of the standard model. If the coefficients of all blind operators have the same magnitude, indirect bounds on the anomalous triple vector-boson couplings can also be inferred, provided there is no large cancellation in the Higgs-gamma-gamma coupling.
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. This latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach in the search of ink specimens in ink databases and in the interpretation of their evidential value.
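The actual comparison algorithms are those described in the cited Part II paper. Purely as a hedged illustration of automated, objective profile comparison, the sketch below scores two hypothetical HPTLC densitometric traces with a Pearson correlation; the function name and data are invented for the example and do not reproduce the authors' method.

```python
import numpy as np

def compare_profiles(profile_a, profile_b):
    """Illustrative similarity score between two HPTLC densitometric profiles
    (1-D intensity traces), using Pearson correlation after normalisation.
    A generic stand-in, not the algorithm from the cited Part II paper."""
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical traces for a questioned ink and a reference-library ink.
questioned = np.array([0.1, 0.4, 0.9, 0.5, 0.2, 0.1])
reference  = np.array([0.1, 0.5, 0.8, 0.5, 0.2, 0.1])
print(compare_profiles(questioned, reference))  # close to 1.0 for similar inks
```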
Abstract:
The aim of this article is to show how a contemporary playwright turns once more to the Platonic image of the cave in order to reflect on the necessary existential journey of men and women, as in a Bildungsroman. Sooner or later, men and women must abandon the protection that any sort of cavern, such as home, the family garden or the family itself, can offer. Although he writes from a point of view that is by no means idealistic or metaphysical, thanks to R. Sirera and to the very applicability of Platonic images, Plato once again becomes a classical reference which is both useful and even unavoidable if one bears in mind the Platonic origin of all literary caverns.
Abstract:
Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their compositions into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but so far it has never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS), which exchanges only corresponding agents between teams, and free agent swapping (FAS), which allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases, the exploration and exploitation aspects of the evolutionary algorithm are not well balanced, resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods. Our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions compared to either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
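A minimal sketch of the two crossover operators and the mixed strategy described above, assuming teams are represented as lists of agent genomes and an assumed per-agent swap probability and switch point; the study's actual representation and parameters may differ.

```python
import random

# A team is a list of agent genomes; here a genome could be any object
# (e.g. a list of controller weights). Both operators exchange whole agents.

def restricted_agent_swapping(team_a, team_b, p_swap=0.5):
    """RAS: agents may only be exchanged between corresponding positions."""
    child_a, child_b = list(team_a), list(team_b)
    for i in range(len(team_a)):
        if random.random() < p_swap:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

def free_agent_swapping(team_a, team_b, p_swap=0.5):
    """FAS: an agent may be exchanged with an arbitrary agent of the partner team."""
    child_a, child_b = list(team_a), list(team_b)
    for i in range(len(team_a)):
        if random.random() < p_swap:
            j = random.randrange(len(team_b))  # arbitrary position in the partner
            child_a[i], child_b[j] = child_b[j], child_a[i]
    return child_a, child_b

def mixed_crossover(team_a, team_b, generation, switch_generation=100):
    """Mixed strategy from the abstract: explore with FAS first, then exploit
    with RAS (the switch point is an assumed parameter)."""
    if generation < switch_generation:
        return free_agent_swapping(team_a, team_b)
    return restricted_agent_swapping(team_a, team_b)
```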
Abstract:
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.
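The decision-theoretic point can be made concrete with a small worked example: given an assumed loss function and a probability that the selected person is the source of the crime stain, the rational choice is the decision with the smaller expected loss. All numeric values below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical loss function (illustrative assumptions):
# keys are (decision, true state of nature).
LOSS = {
    ("individualize", "is_source"): 0.0,       # correct individualization
    ("individualize", "not_source"): 100.0,    # false individualization (worst outcome)
    ("not_individualize", "is_source"): 1.0,   # missed individualization
    ("not_individualize", "not_source"): 0.0,  # correctly withheld
}

def expected_loss(decision, p_source):
    """Expected loss of a decision given the probability the person is the source."""
    return (p_source * LOSS[(decision, "is_source")]
            + (1.0 - p_source) * LOSS[(decision, "not_source")])

def best_decision(p_source):
    return min(("individualize", "not_individualize"),
               key=lambda d: expected_loss(d, p_source))

for p in (0.9, 0.99, 0.999):
    print(p, best_decision(p))
```

With these assumed losses, 'individualize' is only preferred once the source probability exceeds roughly 0.99, illustrating how the decision depends jointly on the state-of-nature probability and the decision maker's loss function.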
Abstract:
The problem of searchability in decentralized complex networks is of great importance in computer science, economy, and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and we obtain expressions for the average search cost both in the presence and the absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: starlike configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when it is large.
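As a toy illustration only (not the paper's formalism, which also accounts for congestion among parallel searches), the sketch below estimates the average cost of a blind local search on a star-like versus a more homogeneous network, using networkx graph generators as an assumed stand-in.

```python
import random
import networkx as nx

def local_search_cost(graph, source, target, rng):
    """Steps taken by a blind local search that hops to a uniformly random
    neighbour until the target is reached (a toy stand-in for a local
    search algorithm, ignoring congestion)."""
    node, steps = source, 0
    while node != target:
        node = rng.choice(list(graph.neighbors(node)))
        steps += 1
    return steps

def average_cost(graph, trials=2000, seed=0):
    rng = random.Random(seed)
    nodes = list(graph.nodes)
    total = 0
    for _ in range(trials):
        s, t = rng.sample(nodes, 2)
        total += local_search_cost(graph, s, t, rng)
    return total / trials

star = nx.star_graph(19)  # 20 nodes, star-like configuration
homogeneous = nx.connected_watts_strogatz_graph(20, 4, 0.5, seed=1)
print("star:       ", average_cost(star))
print("homogeneous:", average_cost(homogeneous))
```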
Abstract:
The Organization of the Thesis: The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how the search-strategic behavior is influenced by UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting. Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and the labor market strategic behavior. The effects of alterations in the labor market's deep economic structural parameters, i.e. individual preferences and production technology, are shown in section 5.3. Finally, the UI setting's impacts on the labor market are studied in section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties, the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, the duration, replacement rate and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997.
The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics, in order to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.
Abstract:
Introduction: Targeted intrathecal drug infusion to treat moderate to severe chronic pain has become a standard part of treatment algorithms when more conservative options fail. This therapy is well established in the literature, has shown efficacy, and is an important tool for the treatment of both cancer and noncancer pain; however, it has become clear in recent years that intrathecal drug delivery is associated with risks for serious morbidity and mortality. Methods: The Polyanalgesic Consensus Conference is a meeting of experienced implanting physicians who strive to improve care in those receiving implantable devices. Employing data generated through an extensive literature search combined with clinical experience, this work group formulated recommendations regarding awareness, education, and mitigation of the morbidity and mortality associated with intrathecal therapy to establish best practices for targeted intrathecal drug delivery systems. Results: Best practices for improved patient care and outcomes with targeted intrathecal infusion are recommended to minimize the risk of morbidity and mortality. Areas of focus include respiratory depression, infection, granuloma, device-related complications, endocrinopathies, and human error. Specific guidance is given with each of these issues and the general use of the therapy. Conclusions: Targeted intrathecal drug delivery systems are associated with risks for morbidity and mortality that can be devastating. The panel has given guidance to treating physicians and healthcare providers to reduce the incidence of these problems and to improve outcomes when problems occur.
Abstract:
BACKGROUND: Small RNAs (sRNAs) are widespread among bacteria and have diverse regulatory roles. Most of these sRNAs have been discovered by a combination of computational and experimental methods. In Pseudomonas aeruginosa, a ubiquitous Gram-negative bacterium and opportunistic human pathogen, the GacS/GacA two-component system positively controls the transcription of two sRNAs (RsmY, RsmZ), which are crucial for the expression of genes involved in virulence. In the biocontrol bacterium Pseudomonas fluorescens CHA0, three GacA-controlled sRNAs (RsmX, RsmY, RsmZ) regulate the response to oxidative stress and the expression of extracellular products including biocontrol factors. RsmX, RsmY and RsmZ contain multiple unpaired GGA motifs and control the expression of target mRNAs at the translational level, by sequestration of translational repressor proteins of the RsmA family. RESULTS: A combined computational and experimental approach enabled us to identify 14 intergenic regions encoding sRNAs in P. aeruginosa. Eight of these regions encode newly identified sRNAs. The intergenic region 1698 was found to specify a novel GacA-controlled sRNA termed RgsA. GacA regulation appeared to be indirect. In P. fluorescens CHA0, an RgsA homolog was also expressed under positive GacA control. This 120-nt sRNA contained a single GGA motif and, unlike RsmX, RsmY and RsmZ, was unable to derepress translation of the hcnA gene (involved in the biosynthesis of the biocontrol factor hydrogen cyanide), but contributed to the bacterium's resistance to hydrogen peroxide. In both P. aeruginosa and P. fluorescens the stress sigma factor RpoS was essential for RgsA expression. CONCLUSION: The discovery of an additional sRNA expressed under GacA control in two Pseudomonas species highlights the complexity of this global regulatory system and suggests that the mode of action of GacA control may be more elaborate than previously suspected. Our results also confirm that several GGA motifs are required in an sRNA for sequestration of the RsmA protein.
Abstract:
This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) this latter individual is found as a result of a database search and (ii) remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), this paper intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic that has repeatedly demonstrated that the latter two requirements (i) and (ii) should not be a cause of concern.
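A small numeric sketch, under assumed values for the random match probability, database size and prior odds, of the effect of the 'correction factor' discussed above: dividing the likelihood ratio by the database size drastically, and on the paper's account needlessly, weakens the reported support for the source proposition.

```python
# Illustrative numbers only (assumptions, not values from the paper).
gamma = 1e-6                 # random match probability of the crime-stain profile
n_database = 100_000         # size of the searched database
prior_odds = 1 / 1_000_000   # assumed prior odds that the suspect is the source

lr_unmodified = 1 / gamma                 # LR for 'suspect is the source' vs 'someone else is'
lr_n_divided = lr_unmodified / n_database # the database-size 'correction' the paper argues against

for label, lr in (("unmodified LR", lr_unmodified), ("n-divided LR", lr_n_divided)):
    posterior_odds = prior_odds * lr
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"{label}: LR = {lr:g}, posterior probability = {posterior_prob:.6f}")
```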