992 results for Distance convex simple graphs
Abstract:
Many quantitative problems from widely different fields can be described as optimization problems: a measure of the quality of the solution is to be optimized while certain conditions on the solution are satisfied. The quality measure is usually called the objective function and can describe costs (for example production, logistics), potential energy (molecular modeling, protein folding), risk (finance, insurance) or some other relevant measure. My doctoral thesis specifically discusses nonlinear programming, NLP, in finite dimensions. Problems with simple structure, for example some form of convexity, can be solved efficiently. Unfortunately, not all quantitative relationships can be modeled in a convex way. Nonconvex problems can be attacked with heuristic methods, algorithms that search for solutions using deterministic or stochastic rules of thumb. Sometimes this works well, but the heuristics can rarely guarantee the quality of the solution, or even that a solution will be found. For some applications this is unacceptable. Instead, one can apply so-called global optimization. By successively dividing the variable domain into smaller parts and computing stronger bounds on the optimal value, a solution within the error tolerance is found. This method is called branch-and-bound. To provide lower bounds (when minimizing), the problem is approximated by simpler problems, for example convex ones, which can be solved efficiently. The thesis studies approaches to approximating differentiable functions with convex underestimators, in particular the so-called alphaBB method. This method adds perturbations of a certain form and guarantees convexity by imposing conditions on the perturbed Hessian matrix. My research has highlighted a natural extension of the perturbations used in alphaBB. New methods for determining underestimation parameters have been described and compared. The summary part discusses global optimization from broader perspectives on optimization and computational algorithms.
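To make the alphaBB construction concrete, the following minimal Python sketch builds an underestimator of the standard form L(x) = f(x) + alpha * sum_i (x_i^L - x_i)(x_i^U - x_i). It is illustrative only: the function names are hypothetical, and the smallest Hessian eigenvalue is estimated by sampling, whereas the actual method derives rigorous values of alpha, for example via interval arithmetic on the Hessian.

```python
import numpy as np

# Minimal sketch of an alphaBB-style convex underestimator.
# CAVEAT: lambda_min(H) is estimated by sampling, which is NOT a rigorous
# bound; the real method bounds the Hessian with interval arithmetic.

def alphabb_underestimator(f, hess, lo, hi, n_samples=1000, seed=0):
    """Build L(x) = f(x) + alpha * sum_i (lo_i - x_i) * (hi_i - x_i)."""
    rng = np.random.default_rng(seed)
    lam_min = min(
        np.linalg.eigvalsh(hess(lo + rng.random(len(lo)) * (hi - lo))).min()
        for _ in range(n_samples)
    )
    alpha = max(0.0, -0.5 * lam_min)      # alpha = 0 if f is already convex

    def L(x):
        # The added term is <= 0 on the box, so L underestimates f there,
        # and it shifts every Hessian eigenvalue up by 2 * alpha.
        return f(x) + alpha * np.dot(lo - x, hi - x)

    return L, alpha

# Example: a nonconvex function on the box [-1, 2] x [-1, 2]
f = lambda x: np.sin(x[0]) * np.cos(x[1])
hess = lambda x: np.array(
    [[-np.sin(x[0]) * np.cos(x[1]), -np.cos(x[0]) * np.sin(x[1])],
     [-np.cos(x[0]) * np.sin(x[1]), -np.sin(x[0]) * np.cos(x[1])]])
L, alpha = alphabb_underestimator(f, hess, np.array([-1.0, -1.0]),
                                  np.array([2.0, 2.0]))
print(alpha, L(np.array([0.5, 0.5])))
```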
Abstract:
In simple terms, a phytosociological survey is a group of ecological evaluation methods whose aim is to provide a comprehensive overview of both the composition and distribution of plant species in a given plant community. To understand the applicability of phytosociological surveys to weed science, as well as their validity, their ecological basis should be understood and the most suitable methods need to be chosen, because cultivated fields present a relatively distinct set of selection factors compared to natural plant communities. For weed science, the following sequence of steps is proposed as the most suitable: (1) overall infestation; (2) phytosociological tables/graphs; (3) intra-characterization by diversity; (4) inter-characterization and grouping by cluster analysis. A summary of methods is established in order to assist weed science researchers through their steps into the realm of phytosociology.
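As a hedged illustration of step (3), the sketch below computes the Shannon-Wiener diversity index and Pielou's evenness, two standard choices for intra-characterization by diversity; the species counts are hypothetical, and the abstract does not prescribe these particular indices.

```python
import math

# Illustrative sketch of step (3), intra-characterization by diversity,
# via the standard Shannon-Wiener index H' and Pielou's evenness J'.
# Species names and counts are hypothetical, not from the paper.
counts = {"Urochloa decumbens": 120, "Bidens pilosa": 45, "Amaranthus sp.": 30}

total = sum(counts.values())
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())
evenness = shannon / math.log(len(counts))   # J' = H' / ln(species richness)
print(f"H' = {shannon:.3f}, J' = {evenness:.3f}")
```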
Abstract:
Two experiments were carried out to evaluate the initial growth of Eucalyptus urograndis plants in coexistence with Urochloa decumbens and U. ruziziensis. In 100-L boxes, one plant of U. decumbens or U. ruziziensis grew in coexistence with one plant of E. urograndis clone C219H or H15, respectively, at distances of 0, 5, 10, 15, 20, 25, 30, 35, and 40 cm from the crop. After 30, 60, and 90 days (both clones) and 150 days (H15 only), growth characteristics were evaluated. Plants of both clones growing in weed-free conditions showed better growth and development than plants growing in weedy conditions, independently of distance, with the greatest plant height, stem diameter, stem dry mass, and leaf dry mass. In the same way, the number of branches, number of leaves, and leaf area of clone C219H were similarly affected. Urochloa ruziziensis reduced the dry mass accumulation of stems and leaves by 0.06 and 0.32 g per plant, respectively, for each centimeter it grew closer to the crop, while U. decumbens reduced them by 0.03 and 0.14 g per plant. The interference of U. decumbens and U. ruziziensis with E. urograndis is more intense when the weeds grow at short distances from the crop.
Abstract:
A subshift is a set of infinite one- or two-way sequences over a fixed finite set, defined by a set of forbidden patterns. In this thesis, we study subshifts in the topological setting, where the natural morphisms between them are the ones defined by a (spatially uniform) local rule. Endomorphisms of subshifts are called cellular automata, and we call the set of cellular automata on a subshift its endomorphism monoid. It is known that the set of all sequences (the full shift) allows cellular automata with complex dynamical and computational properties. We are interested in subshifts that do not support such cellular automata. In particular, we study countable subshifts, minimal subshifts and subshifts with additional universal algebraic structure that cellular automata need to respect, and investigate certain criteria of 'simplicity' of the endomorphism monoid for each of them. In the case of countable subshifts, we concentrate on countable sofic shifts, that is, countable subshifts defined by a finite state automaton. We develop some general tools for studying cellular automata on such subshifts, and show that nilpotency and periodicity of cellular automata are decidable properties, and positive expansivity is impossible. Nevertheless, we also prove various undecidability results, by simulating counter machines with cellular automata. We prove that minimal subshifts generated by primitive Pisot substitutions only support virtually cyclic automorphism groups, and give an example of a Toeplitz subshift whose automorphism group is not finitely generated. In the algebraic setting, we study the centralizers of CA, and group- and lattice-homomorphic CA. In particular, we obtain results about centralizers of symbol permutations and bipermutive CA, and their connections with group structures.
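For readers unfamiliar with the objects involved, here is a minimal sketch (not taken from the thesis) of a cellular automaton given by a spatially uniform local rule; a finite configuration with periodic boundary stands in for a point of the full shift over {0, 1}. The XOR rule shown is bipermutive, the class of CA mentioned in the algebraic part.

```python
# Illustrative sketch: a cellular automaton applies a spatially uniform
# local rule at every coordinate simultaneously. A finite configuration
# with periodic boundary stands in for a biinfinite sequence.

def step(config, rule):
    """One application of a radius-1 local rule f(left, center, right)."""
    n = len(config)
    return [rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

# Rule 90 (next state = left XOR right) is bipermutive: for each fixed
# choice of the other arguments, it permutes the leftmost argument, and
# likewise the rightmost one.
xor_rule = lambda left, center, right: left ^ right

config = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    config = step(config, xor_rule)
print(config)
```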
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
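A minimal Python sketch of the execution model described above (this is not RVC-CAL itself): actors communicate only through FIFO queues and may fire, independently of one another, whenever their firing rule is satisfied; the round-robin loop stands in for a dynamic scheduler.

```python
from collections import deque

# Minimal sketch of the dataflow execution model: nodes communicate only
# through FIFO queues and fire as soon as their firing rule is satisfied.

class Actor:
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]
        self.output = deque()

    def can_fire(self):          # firing rule: one token on every input
        return all(self.inputs)

    def fire(self):              # consume one token per input, produce one
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.func(*args))

# A two-node graph, scale -> offset, run by a trivial dynamic scheduler.
scale = Actor(lambda x: 2 * x, 1)
offset = Actor(lambda x: x + 1, 1)
offset.inputs[0] = scale.output          # the edge is a shared queue
scale.inputs[0].extend([1, 2, 3])

while scale.can_fire() or offset.can_fire():
    for actor in (scale, offset):
        if actor.can_fire():
            actor.fire()
print(list(offset.output))               # [3, 5, 7]
```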
Abstract:
This thesis presents a framework for the segmentation of clustered overlapping convex objects. The proposed approach is based on a three-step framework addressing the tasks of seed point extraction, contour evidence extraction, and contour estimation. State-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data. Based on the evaluation results, a method combining the best performers in each step was presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods; the results showed a promising improvement over the competing methods, with high segmentation and size distribution estimation accuracy.
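As a hedged sketch of the final step only: given contour evidence for one partially occluded object (here a synthetic half-ellipse, since the paper's data and the first two steps are not reproduced), least-squares ellipse fitting via OpenCV recovers the full object boundary.

```python
import cv2
import numpy as np

# Sketch of the contour estimation step: once contour evidence (the
# visible boundary points of one object) is available, least-squares
# ellipse fitting completes the boundary. The evidence here is synthetic.

theta = np.linspace(0.0, np.pi, 30)          # visible half of the contour
evidence = np.stack([100 + 40 * np.cos(theta),
                     80 + 25 * np.sin(theta)], axis=1).astype(np.float32)

# cv2.fitEllipse needs at least 5 points; returns center, full axis
# lengths and rotation angle of the fitted ellipse.
(cx, cy), (d1, d2), angle = cv2.fitEllipse(evidence)
print(f"center=({cx:.1f}, {cy:.1f}) axes=({d1:.1f}, {d2:.1f}) "
      f"angle={angle:.1f} deg")
```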
A simple model for the estimation of congenital malformation frequency in racially mixed populations
Abstract:
A simple model is proposed, using the method of maximum likelihood, to estimate malformation frequencies in racial groups based on data obtained from hospital services. The model uses the proportions of racial admixture and the observed malformation frequency. It was applied to two defects, postaxial polydactyly and cleft lip, whose frequencies are known to be heterogeneous among racial groups. The frequencies estimated in each racial group were those expected for these malformations, which demonstrates the applicability of the method.
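The abstract gives no explicit formulas; the following is only a plausible reconstruction of the mixture idea behind the model.

```latex
% Hedged reconstruction: with admixture proportions \pi_k and
% group-specific malformation frequencies p_k, the frequency seen in the
% mixed hospital sample is
\[
  p_{\mathrm{obs}} = \sum_{k} \pi_k \, p_k ,
\]
% and, with m malformed among n births, maximum likelihood chooses the
% p_k maximizing the binomial likelihood
\[
  L(p_1,\dots,p_K) = \binom{n}{m}\, p_{\mathrm{obs}}^{\,m}
  \left(1 - p_{\mathrm{obs}}\right)^{n-m}.
\]
```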
Abstract:
It is well known that saccadic reaction times (SRT) are reduced when the target is preceded by the offset of the fixation point (FP) - the gap effect. Some authors have proposed that the FP offset also allows the saccadic system to generate a separate population of SRT, the express saccades. Nevertheless, there is no agreement as to whether the gap effect and express responses are also present for manual reaction times (MRT). We tested the gap effect and the MRT distribution in two different conditions, simple and choice MRT. In the choice MRT condition, subjects had to identify the side of the stimulus and select the appropriate response, while in the simple MRT condition these stages are not necessary. We report that the gap effect was present in both conditions (22 ms for the choice MRT condition; 15 ms for the simple MRT condition), but, when analyzing the MRT distributions, we did not find any clear evidence for express manual responses. The main difference in MRT distribution between the simple and choice conditions was a shift towards shorter values for simple MRT.
Abstract:
Vertebrate gap junctions are aggregates of transmembrane channels which are composed of connexin (Cx) proteins encoded by at least fourteen distinct genes in mammals. Since the same Cx type can be expressed in different tissues and more than one Cx type can be expressed by the same cell, the thorough identification of which connexin is in which cell type and how connexin expression changes after experimental manipulation has become quite laborious. Here we describe an efficient, rapid and simple method by which connexin type(s) can be identified in mammalian tissue and cultured cells using endonuclease cleavage of RT-PCR products generated from "multi primers" (sense primer, degenerate oligonucleotide corresponding to a region of the first extracellular domain; antisense primer, degenerate oligonucleotide complementary to the second extracellular domain) that amplify the cytoplasmic loop regions of all known connexins except Cx36. In addition, we provide sequence information on RT-PCR primers used in our laboratory to screen individual connexins and predictions of extension of the "multi primer" method to several human connexins.
Abstract:
Polymerase chain reaction (PCR) has been widely investigated for the diagnosis of tuberculosis. However, before this technique is applied to clinical samples, it needs to be well standardized. We describe the use of the McFarland nephelometer, a very simple approach to determining microorganism concentration in solution, for PCR standardization and DNA quantitation, using Mycobacterium tuberculosis as a model. Tuberculosis is an extremely important disease for the public health system in developing countries and, with the advent of AIDS, it has also become an important public health problem in developed countries. Using Mycobacterium tuberculosis as a research model, we were able to detect 3 M. tuberculosis genomes using the McFarland nephelometer to assess mycobacterial concentration. We have shown here that the McFarland nephelometer is an easy and reliable procedure to determine PCR sensitivity at lower costs.
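For orientation, the sketch below combines the conventional McFarland-to-density approximation (standard no. 1 corresponds to roughly 3 x 10^8 cells/ml, approximately linear in the standard number) with serial dilution to estimate how many genome copies end up in one PCR tube; the numbers are textbook conventions, not the paper's own calibration.

```python
# Orientation sketch using the conventional McFarland approximation
# (standard no. 1 ~ 3.0e8 cells/ml, roughly linear in the standard
# number). Textbook conventions, not the paper's own calibration.

CELLS_PER_ML_PER_MCFARLAND = 3.0e8

def genomes_per_reaction(mcfarland, tenfold_dilutions, template_ul):
    """Genome copies pipetted into one PCR tube after serial dilution."""
    cells_per_ml = mcfarland * CELLS_PER_ML_PER_MCFARLAND
    cells_per_ml /= 10 ** tenfold_dilutions
    return cells_per_ml * template_ul / 1000.0   # 1000 ul per ml

# e.g. a McFarland 1 suspension diluted 10^5-fold, 1 ul of template:
print(genomes_per_reaction(1.0, 5, 1.0))         # -> 3.0 genomes per tube
```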
Abstract:
To assess the clinical relevance of a semi-quantitative measurement of human cytomegalovirus (HCMV) DNA in renal transplant recipients within the typical clinical context of a developing country, where virtually 100% of both recipients and donors are seropositive for this virus, we undertook HCMV DNA quantification using a simple, semi-quantitative, limiting dilution polymerase chain reaction (PCR). We evaluated this assay prospectively in 52 renal transplant patients from whom a total of 495 serial blood samples were collected. The samples scored HCMV positive by qualitative PCR had their levels of HCMV DNA determined by end-point dilution PCR. All patients were HCMV DNA positive during the monitoring period, and a diagnosis of symptomatic infection was made for 4 of the 52 patients. In symptomatic patients the geometric mean of the highest level of HCMV DNAemia was 152,000 copies per 10^6 leukocytes, while for the asymptomatic group this value was 12,050. Symptomatic patients showed high, protracted HCMV DNA levels, whereas asymptomatic patients demonstrated intermittent low or moderate levels. Using a cut-off value of 100,000 copies per 10^6 leukocytes, the limiting dilution assay had a sensitivity of 100%, a specificity of 92%, a positive predictive value of 43% and a negative predictive value of 100% for HCMV disease. In this patient group, there was universal HCMV infection but relatively infrequent symptomatic HCMV disease. The two patient groups were readily distinguished by monitoring with the limiting dilution assay, an extremely simple technology immediately applicable in any clinical laboratory with PCR capability.
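For reference, the standard definitions connecting the quoted sensitivity, specificity and predictive values follow from Bayes' rule (the paper's exact figures derive from its own 2x2 counts):

```latex
% Standard definitions, not the paper's derivation: with sensitivity s,
% specificity c and disease prevalence \rho,
\[
  \mathrm{PPV} = \frac{s\,\rho}{s\,\rho + (1-c)(1-\rho)},
  \qquad
  \mathrm{NPV} = \frac{c\,(1-\rho)}{(1-s)\,\rho + c\,(1-\rho)} .
\]
% With s = 1 the NPV equals 1 regardless of prevalence, matching the
% 100% negative predictive value quoted for the limiting dilution assay.
```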
Abstract:
R,S-sotalol, a β-blocker drug with class III antiarrhythmic properties, is prescribed to patients with ventricular, atrial and supraventricular arrhythmias. A simple and sensitive method based on HPLC-fluorescence is described for the quantification of the R,S-sotalol racemate in 500 µl of plasma. R,S-sotalol and its internal standard (atenolol) were eluted after 5.9 and 8.5 min, respectively, from a 4-micron C18 reverse-phase column using a mobile phase consisting of 80 mM KH2PO4, pH 4.6, and acetonitrile (95:5, v/v) at a flow rate of 0.5 ml/min, with detection at λex = 235 nm and λem = 310 nm. This method, validated on the basis of R,S-sotalol measurements in spiked blank plasma, presented 20 ng/ml sensitivity, 20-10,000 ng/ml linearity, and 2.9 and 4.8% intra- and interassay precision, respectively. Plasma sotalol concentrations were determined by applying this method to investigate five high-risk patients with atrial fibrillation admitted to the Emergency Service of the Medical School Hospital, who received sotalol, 160 mg po, as a loading dose. Blood samples were collected from a peripheral vein at zero, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0 and 24.0 h after drug administration. A two-compartment open model was applied. The data obtained, expressed as means, were: CMAX = 1230 ng/ml, TMAX = 1.8 h, AUCT = 10645 ng h ml-1, Kab = 1.23 h-1, α = 0.95 h-1, β = 0.09 h-1, t(1/2)β = 7.8 h, ClT/F = 3.94 ml min-1 kg-1, and Vd/F = 2.53 l/kg. A good systemic availability and a fast absorption were obtained. Drug distribution was reduced to the same extent in terms of total body clearance when patients and healthy volunteers were compared, and consequently the elimination half-life remained unchanged. Thus, the method described in the present study is useful for therapeutic drug monitoring, pharmacokinetic investigation and pharmacokinetic-pharmacodynamic studies of sotalol in patients with tachyarrhythmias.
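As a consistency check drawn from standard pharmacokinetics rather than from the paper itself: in a two-compartment open model the post-absorption decline is biexponential, and the reported terminal half-life follows from β alone.

```latex
% Standard two-compartment relations (a consistency check, not the
% paper's derivation): post-absorption disposition is biexponential,
\[
  C(t) = A\,e^{-\alpha t} + B\,e^{-\beta t},
\]
% so the terminal (elimination) half-life follows from \beta alone:
\[
  t_{1/2,\beta} = \frac{\ln 2}{\beta}
                = \frac{0.693}{0.09\ \mathrm{h^{-1}}}
                \approx 7.7\ \mathrm{h},
\]
% in agreement, up to rounding of \beta, with the reported 7.8 h.
```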
Abstract:
We have developed a system with two epi-illumination sources, a DC-regulated lamp for transillumination and mechanical switches for rapid shift of illumination and detection of defined areas (250-750 µm²) by fluorescence and phosphorescence videomicroscopy. The system permits investigation of standard microvascular parameters, vascular permeability as well as intra- and extravascular PO2 by phosphorescence quenching of Pd-meso-tetra (4-carboxyphenyl) porphine (PORPH). A Pechan prism was used to position a defined region over the photomultiplier and TV camera. In order to validate the system for in vivo use, in vitro tests were performed with probes at concentrations that can be found in microvascular studies. Extensive in vitro evaluations were performed by filling glass capillaries with solutions of various concentrations of FITC-dextran (diluted in blood and in saline) mixed with different amounts of PORPH. Fluorescence intensity and phosphorescence decay were determined for each mixture. FITC-dextran solutions without PORPH and PORPH solutions without FITC-dextran were used as references. Phosphorescence decay curves were relatively unaffected by the presence of FITC-dextran at all concentrations tested (0.1 µg/ml to 5 mg/ml). Likewise, fluorescence determinations were performed in the presence of PORPH (0.05 to 0.5 mg/ml). The system was successfully used to study macromolecular extravasation and PO2 in the rat mesentery circulation under controlled conditions and during ischemia-reperfusion.
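For background, PO2 determination by phosphorescence quenching rests on the standard Stern-Volmer relation (stated here from general principles, not taken from the paper):

```latex
% Background relation (standard Stern-Volmer quenching): oxygen tension
% follows from the measured phosphorescence lifetime,
\[
  \frac{1}{\tau} = \frac{1}{\tau_0} + k_q\,\mathrm{PO_2},
\]
% where \tau_0 is the lifetime at zero oxygen and k_q the quenching
% constant of the probe (here PORPH).
```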
Abstract:
The Christo Inventory for Substance-Misuse Services (CISS) is a single-page outcome evaluation tool completed by drug/alcohol service workers, either on the basis of direct client interviews or from personal experience of their client supplemented by existing assessment notes. It was developed to assist substance misuse services to empirically demonstrate the effectiveness of their treatments to their respective funding bodies. Its 0 to 20 unidimensional scale consists of 10 items reflecting clients' problems with social functioning, general health, sexual/injecting risk behavior, psychological functioning, occupation, criminal involvement, drug/alcohol use, ongoing support, compliance, and working relationships. Good reliability and validity have already been demonstrated for the CISS [Christo et al., Drug and Alcohol Dependence 2000; 59: 189-197], but the original was written in English, and a Portuguese version is presented here. The present review explores its applicability to a Brazilian setting, summarizes its characteristics and uses, and describes the process of translation into Portuguese. A pilot study conducted in a substance misuse service for adolescents indicated that it is likely to be suitable for use in a Brazilian population. The simplicity, flexibility and brevity of the CISS make it a useful tool, allowing comparison of clients within and between many different service settings.