974 results for D-Set


Relevance: 30.00%

Abstract:

Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer. Methods and Materials: Using the NTCP algorithm of the CMS XiO Version 4.6 radiotherapy planning system (CMS Inc., St Louis, MO) and the Lyman-Kutcher-Burman (LKB) model, we calculated DVH indices for the ipsilateral lung and heart and the resulting normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients. Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, the heart V25, and the mean and maximum doses to the lung and the heart. The dose-volume histogram (DVH) results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4%, with a mean value of 1.7%. The maximum NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in all patients (n = 12) before and after the set-up error simulations. Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
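The LKB model referred to above has a compact closed form: the DVH is reduced to a generalized equivalent uniform dose (gEUD), which is then passed through a probit dose-response curve. The following minimal Python sketch illustrates that computation; the toy DVH and the parameter values (td50, m, n) are illustrative literature-style numbers, not the planning-system implementation or the patient data of this study.

```python
import math

def lkb_ntcp(dvh, td50, m, n):
    """Normal tissue complication probability under the Lyman-Kutcher-Burman model.

    dvh  : list of (dose_in_gy, fractional_volume) bins; fractional volumes sum to 1
    td50 : uniform whole-organ dose giving a 50% complication probability
    m    : slope parameter of the dose-response curve
    n    : volume-effect parameter (n near 1: parallel organ; n near 0: serial organ)
    """
    # Kutcher-Burman DVH reduction to a generalized equivalent uniform dose
    geud = sum(v * d ** (1.0 / n) for d, v in dvh) ** n
    # Lyman probit dose-response
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Toy lung DVH: 30% of the volume at 20 Gy, 70% at 5 Gy, with illustrative
# pneumonitis parameters (these are not the values used in the study above).
dvh = [(20.0, 0.3), (5.0, 0.7)]
print(f"NTCP = {lkb_ntcp(dvh, td50=24.5, m=0.18, n=0.87):.2%}")
```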

Relevance: 30.00%

Abstract:

This paper describes an algorithm to compute the union, intersection and difference of two polygons using a scan-grid approach. In this method, the screen is divided into cells and the algorithm is applied to each cell in turn; the output from all the cells is then integrated to yield a representation of the output polygon. In most cells no computation is required, which makes the algorithm fast. The algorithm has been implemented for polygons but can be extended to polyhedra as well. It is shown to take O(N) time in the average case, where N is the total number of edges of the two input polygons.
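As a rough illustration of the scan-grid idea, the sketch below classifies each cell only by its centre point using an even-odd point-in-polygon test. This is deliberately much cruder than the paper's per-cell edge processing (it yields a raster approximation rather than an exact output polygon), but it shows why most cells are cheap: they lie uniformly inside or outside both polygons. All names are illustrative.

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    m = len(poly)
    for i in range(m):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % m]
        if (y0 > y) != (y1 > y):
            # x-coordinate where the edge crosses the horizontal line through (x, y)
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def grid_boolean(poly_a, poly_b, op, cells=64, extent=((0.0, 0.0), (1.0, 1.0))):
    """Coarse scan-grid Boolean operation: classify every cell by its centre.

    op is 'union', 'intersection' or 'difference'. Returns a cells-by-cells
    Boolean mask approximating the output polygon."""
    (x_min, y_min), (x_max, y_max) = extent
    dx, dy = (x_max - x_min) / cells, (y_max - y_min) / cells
    combine = {
        "union": lambda a, b: a or b,
        "intersection": lambda a, b: a and b,
        "difference": lambda a, b: a and not b,
    }[op]
    return [[combine(point_in_polygon(x_min + (i + 0.5) * dx, y_min + (j + 0.5) * dy, poly_a),
                     point_in_polygon(x_min + (i + 0.5) * dx, y_min + (j + 0.5) * dy, poly_b))
             for i in range(cells)]
            for j in range(cells)]

# Two overlapping axis-aligned squares; count the cells in their intersection.
a = [(0.1, 0.1), (0.6, 0.1), (0.6, 0.6), (0.1, 0.6)]
b = [(0.4, 0.4), (0.9, 0.4), (0.9, 0.9), (0.4, 0.9)]
mask = grid_boolean(a, b, "intersection")
print(sum(map(sum, mask)), "of", 64 * 64, "cells lie in the intersection")
```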

Relevance: 30.00%

Abstract:

Sorghum (Sorghum bicolor (L.) Moench) is grown as a dryland crop in semiarid subtropical and tropical environments where it is often exposed to high temperatures around flowering. Projected climate change is likely to increase the incidence of exposure to high temperature, with potential adverse effects on growth, development and grain yield. The objectives of this study were to explore genetic variability in the effects of high temperature on crop growth and development, in vitro pollen germination and seed-set. Eighteen diverse sorghum genotypes were grown at day:night temperatures of 32:21°C (optimum temperature, OT) and 38:21°C (high temperature, HT, during the middle of the day) in controlled environment chambers. HT significantly accelerated development and reduced plant height and individual leaf size; however, there was no consistent effect on leaf area per plant. HT significantly reduced pollen germination and seed-set percentage in all genotypes; under HT, genotypes differed significantly in pollen viability percentage (17–63%) and seed-set percentage (7–65%). The two traits were strongly and positively associated (R² = 0.93, n = 36, P < 0.001), suggesting a causal association. The observed genetic variation in pollen and seed-set traits should be exploitable through breeding to develop heat-tolerant varieties for future climates.

Relevance: 30.00%

Abstract:

We prove a lower bound of Ω((1/ε)(m + log(d − a))), where a = ⌈log_m(1/(4ε))⌉, on the hitting set size for combinatorial rectangles of volume at least ε in [m]^d, for ε ∈ [m^(−(d−2)), 2/9] and d > 2.

Relevance: 30.00%

Abstract:

The correlation clustering problem is fundamental in both theory and practice: it involves identifying clusters of objects in a data set based on their similarity. A traditional modelling of this question as a graph-theoretic problem associates vertices with data points and indicates similarity by adjacency; clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion is a structure whose vertices are mutually "not too far apart" without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose output components are 1-clubs) does not admit a sub-exponential time algorithm [STACS 2013]; that is, there is no algorithm solving the problem in time 2^(o(k)) n^(O(1)). Surprisingly, however, when the number of cliques in the output graph is restricted to d, the problem can be solved in time O(2^(O(√(dk))) + m + n). We show that this sub-exponential time algorithm for a fixed number of cliques is the exception rather than the rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time 2^(o(k)) n^(O(1)). We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time 2^(o(k)) n^(O(1)) for any fixed s, d ≥ 2. This is a radical contrast with the situation established for cliques, where sub-exponential algorithms are known.
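For concreteness, here is a small brute-force Python sketch of the problem being studied: it checks whether every component of a graph is an s-club (diameter at most s) and searches for the smallest deletion set of at most k edges. It runs in time exponential in k and is purely an illustration of the problem definition, not of the algorithms or lower bounds discussed above.

```python
from collections import deque
from itertools import combinations

def all_components_are_s_clubs(n, edges, s):
    """Return True iff every connected component has diameter at most s."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for src in range(n):
        # BFS from src; any reachable vertex farther than s is a violation.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    if dist[w] > s:
                        return False
                    queue.append(w)
    return True

def s_club_cluster_edge_deletion(n, edges, s, k):
    """Smallest set of at most k edges whose deletion leaves only s-clubs, or None."""
    for size in range(k + 1):
        for removed in combinations(edges, size):
            kept = [e for e in edges if e not in set(removed)]
            if all_components_are_s_clubs(n, kept, s):
                return list(removed)
    return None

# A path on 4 vertices has diameter 3, so it is not a 2-club;
# deleting a single edge splits it into two 2-clubs.
print(s_club_cluster_edge_deletion(4, [(0, 1), (1, 2), (2, 3)], s=2, k=2))
```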

Relevance: 30.00%

Abstract:

A finite flexible perforated panel set in a differently perforated rigid baffle is considered. The radiation efficiency of such a panel is derived using a 2-D wavenumber domain formulation. This generalization is later used to represent the more practical case of a perforated panel fixed in an unperforated baffle. The perforations take the form of an array of uniformly distributed circular holes, and a complex impedance model for the holes, available in the literature, is used. An averaged fluid-particle velocity is derived using the continuity equation, and the surface pressure is derived using an appropriate momentum equation. The discontinuity in the perforate impedance (due to different hole dimensions or perforation ratios) at the panel-baffle interface is carefully taken into account. It is found that there exists a 'coupling' between different wavenumbers of the spatially averaged fluid-particle velocity field. The change in the resonance frequencies and mode shapes of the panel due to the perforations is accounted for using the receptance method. Analytical expressions for the radiated power and radiation efficiency are derived in integral form, and numerical results are presented. Several comparisons are made to interpret the radiation efficiency curves. Since both the resistive and reactive components of the hole impedance are taken into account, the model is also directly applicable to micro-perforated panels.

Relevance: 30.00%

Abstract:

The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.

Relevance: 30.00%

Abstract:

The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.

First, we develop a novel method for minimizing a particular class of submodular functions, namely those that can be expressed as a sum of concave functions composed with modular functions. The basic algorithm applies an accelerated first-order method to a smoothed version of the function's convex extension. The smoothing is particularly novel in that it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation, as is done with graph-based techniques.
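The convex extension of a submodular function is standardly its Lovász extension, which can be evaluated exactly with Edmonds' greedy algorithm. The sketch below shows that evaluation; the smoothing and accelerated first-order steps described above are not shown, and the function f and the point x are illustrative.

```python
import numpy as np

def lovasz_extension(f, x):
    """Lovász (convex) extension of a set function f at x, via the greedy algorithm.

    f : maps a frozenset of indices to a real value, with f(frozenset()) == 0
    x : 1-D numpy array
    """
    order = np.argsort(-x)            # coordinates in decreasing order
    value, prefix, prev = 0.0, set(), 0.0
    for i in order:
        prefix.add(int(i))
        cur = f(frozenset(prefix))
        value += x[i] * (cur - prev)  # marginal gain weighted by the coordinate
        prev = cur
    return value

# Example from the function class above: a concave function (sqrt) composed
# with a modular function (cardinality), which is submodular.
f = lambda S: np.sqrt(len(S))
print(lovasz_extension(f, np.array([0.2, 0.9, 0.5])))
```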

Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.

Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes for which uniform reconstruction is possible.
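The Fourier transform for set functions mentioned here is, up to convention, the Walsh-Hadamard transform: a function on subsets of an n-element ground set is a vector of length 2^n indexed by subset bitmasks. A minimal sketch, with an illustrative function and sign/normalization convention:

```python
import numpy as np

def set_function_fourier(f_values):
    """Walsh-Hadamard (Fourier) transform of a set function on n elements.

    f_values: length-2^n array; entry b is f(S) for the subset S with bitmask b.
    Returns the 2^n Fourier coefficients, one per subset bitmask.
    """
    coeffs = np.asarray(f_values, dtype=float).copy()
    n = len(coeffs).bit_length() - 1
    for i in range(n):                      # one butterfly pass per ground-set element
        step = 1 << i
        for b in range(0, len(coeffs), 2 * step):
            for j in range(b, b + step):
                a, c = coeffs[j], coeffs[j + step]
                coeffs[j], coeffs[j + step] = a + c, a - c
    return coeffs / len(coeffs)

# Example: f(S) = |S| on n = 3 elements. Its spectrum is supported only on the
# empty set and the singletons, i.e. it is sparse, which is the structure that
# sparse-reconstruction algorithms exploit.
n = 3
f = np.array([bin(b).count("1") for b in range(2 ** n)], dtype=float)
print(set_function_fourier(f))
```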

Relevance: 30.00%

Abstract:

This thesis consists of two independent chapters. The first chapter deals with universal algebra. It is shown, in von Neumann-Bernays-Gödel set theory, that free images of partial algebras exist in arbitrary varieties. It follows from this, as set-complete Boolean algebras form a variety, that there exist free set-complete Boolean algebras on any class of generators. This appears to contradict a well-known result of A. Hales and H. Gaifman, stating that there is no free complete Boolean algebra on any infinite set of generators. However, it does not, as the algebras constructed in this chapter are allowed to be proper classes. The second chapter deals with positive elementary inductions. It is shown that, in any reasonable structure 𝔐, the inductive closure ordinal of 𝔐 is admissible, by showing it is equal to an ordinal measuring the saturation of 𝔐. This is also used to show that non-recursively saturated models of the theories ACF, RCF, and DCF have inductive closure ordinals greater than ω.

Relevance: 30.00%

Abstract:

The structure of the set ϐ(A) of all eigenvalues of all complex matrices (elementwise) equimodular with a given n × n non-negative matrix A is studied. The problem was suggested by O. Taussky, and some aspects have been studied by R. S. Varga and B. W. Levinger.

If every matrix equimodular with A is non-singular, then A is called regular. A new proof of the P. Camion-A. J. Hoffman characterization of regular matrices is given.

The set ϐ(A) consists of m ≤ n closed annuli centered at the origin. Each gap, ɤ, in this set can be associated with a class of regular matrices having a (unique) permutation, π(ɤ). The association depends on both the combinatorial structure of A and the size of the a_ii. Let A be associated with the set of r permutations π1, π2, …, πr, where each gap in ϐ(A) is associated with one of the πk. Then r ≤ n, even when the complement of ϐ(A) has n+1 components. Further, if π(ɤ) is the identity, the real boundary points of ɤ are eigenvalues of real matrices equimodular with A. In particular, if A is essentially diagonally dominant, every real boundary point of ϐ(A) is an eigenvalue of a real matrix equimodular with A.
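The annular structure is easy to see numerically: sampling complex matrices with the same entry moduli as A and recording the moduli of their eigenvalues traces out the annuli (and any gaps between them). The sketch below is only an illustration; the matrix and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # a non-negative matrix

# Sample matrices elementwise equimodular with A: same moduli, random phases.
radii = []
for _ in range(2000):
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=A.shape))
    radii.extend(abs(ev) for ev in np.linalg.eigvals(A * phases))

radii = np.array(radii)
print(f"observed eigenvalue moduli span [{radii.min():.3f}, {radii.max():.3f}]")
```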

Several conjectures based on these results are made which, if verified, would constitute an extension of the Perron-Frobenius Theorem, and an algebraic method is introduced which unites the study of regular matrices with that of ϐ(A).

Relevance: 30.00%

Abstract:

IDOKI SCF Technologies S.L. is a technology-based company, set up in September 2006 in Derio (Biscay), whose main scope is the development of extraction and purification processes based on supercritical fluid extraction (SFE) technology for food processing, the extraction of natural products and the production of personal care products. IDOKI's researchers have worked on many different R&D projects so far, most of them using this technology. However, an SFE method cannot be optimized for the different matrices unless an analytical method is available for characterising the extracts obtained in each experiment. Analytical methods are also essential for the quality control of the raw materials to be used and of the final product. This PhD thesis was born to tackle this problem and is therefore based on the development of different analytical methods for the characterisation of the extracts and products. The projects included in this thesis were the following: the extraction of propolis, the recovery of agroindustrial residues (soy and wine) and the dealcoholisation of wine.

On the one hand, for the extraction of propolis, several UV-Vis spectroscopic methods were used to measure the antioxidant capacity and the total polyphenol and flavonoid content of the extracts, and an SFC method was also developed to measure more specific phenolic compounds. On the other hand, for the recovery of agroindustrial residues, UV-Vis spectroscopy was used to determine the total polyphenol content and two SFC methods were developed to analyse different phenolic compounds; extraction methods such as MAE, FUSE and rotary agitation were also evaluated for the characterisation of the raw materials.

Finally, the dealcoholisation of wine required the development of SBSE-TD-GC-MS and DHS-TD-GC-MS methods for the analysis of aromas, and of an NIR spectroscopic method, supported by chemometrics, for the determination of ethanol content. Most of these methods are now used in IDOKI's lab as routine analyses, along with others not included in this PhD thesis.

Relevance: 30.00%

Abstract:

The fundamental objective of this work is to identify the impacts of the federal tax incentives granted by the Brazilian government through the Lei do Bem on private investment in R&D. Based on a field study of large companies established in innovation habitats, especially in a science and technology park (PqT) managed by a university, we analysed how the Lei do Bem helps disseminate a culture of innovation and increases business competitiveness. Specifically, the study aims to show the importance of including R&D infrastructure expenditures more comprehensively among the activities eligible for tax incentives for companies located in countries notably lacking in this respect, such as Brazil. It also comparatively analyses the tax incentive mechanisms used by other countries, with the intention of proposing adjustments to the structure of the Lei do Bem that would reduce its under-use, which stems from the lack of clarity in its application and the consequently conservative posture adopted by companies. The methodology consisted of an exploratory, qualitative study and a literature review covering the theoretical concepts related to innovation, national, regional and sectoral innovation systems, the triple helix, innovation habitats and public policy, together with data collected from reports prepared by government bodies and from interviews with the companies that installed their R&D centres in the PqT UFRJ, with specialized consultancies and with ANPEI. The results of the study were obtained by compiling the data from these interviews and reports. Among other conclusions, the information showed that tax incentives, especially those related to the reduction of corporate income tax, are important insofar as they allow large companies that already carry out RD&I activities to allocate a larger amount to those activities. Nevertheless, this public policy needs improvement, since it became clear that it does not stimulate all innovation activities, but only those related to R&D, and that there are no adequate incentives for the growth of innovation infrastructure.

Relevance: 30.00%

Abstract:

The responses to an external electric field (DC) of three different varieties of fish, namely Puntius ticto, Heteropneustes fossilis and Tilapia mossambica, having different anatomical and behavioural characteristics, were studied. Clearly distinguished reactions occurred one after another in all the varieties of fish as the field intensity increased, with minor species-specific variations. These reactions can be broadly classified into an initial start (first reaction), forced swimming (galvanotaxis), slackening of the body muscles (galvanonarcosis) and a state of muscular rigidity (tetanus). The orientation of the organism (the projection of its nervous elements) with respect to the surrounding field was found to have an important bearing on the behavioural reactions. Clearly differentiated anodic taxis and true narcosis set in when the fish's body axis was parallel to the lines of current conduction. With a greater angle between the body axis and the current lines, the fish did not show well-marked reactions. When the fish body was perpendicular to the current lines, it responded with anodic curvature and off-balance swimming followed by tetanus.

Relevance: 30.00%

Abstract:

Abstract—There are occasions when ultrasound beamforming is performed with only a subset of the total data that will eventually be available. The most obvious example is a mechanically-swept (wobbler) probe, in which the three-dimensional data block is formed from a set of individual B-scans. In these circumstances, non-blind deconvolution can be used to improve the resolution of the data. Unfortunately, most of these situations involve large blocks of three-dimensional data, and the ultrasound blur function varies spatially with distance from the transducer. These two facts make the deconvolution process time-consuming to implement. This paper is about ways to address this problem and produce spatially-varying deconvolution of large blocks of three-dimensional data in a matter of seconds. We present two approaches, one based on hardware and the other based on software. We compare the time they each take to achieve similar results and discuss the computational resources and form of blur model that each requires.
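A standard way to approximate a spatially-varying blur cheaply is to split the data into depth bands and deconvolve each band with its own blur estimate. The sketch below shows that block-wise approximation with FFT-based Wiener deconvolution in numpy; it is an illustration of the general approach under a known-PSF assumption, not the hardware or software implementations compared in the paper.

```python
import numpy as np

def wiener_deconvolve(block, psf, nsr=1e-2):
    """Non-blind Wiener deconvolution of one data block with a known blur (PSF).

    nsr is the assumed noise-to-signal power ratio regularizing the inverse filter.
    """
    H = np.fft.fftn(psf, s=block.shape)          # PSF spectrum, zero-padded to block size
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)      # regularized inverse filter
    return np.real(np.fft.ifftn(np.fft.fftn(block) * G))

def depthwise_deconvolve(volume, psfs, nsr=1e-2):
    """Approximate spatially-varying deconvolution: split the volume into depth
    bands along axis 0 and apply a depth-appropriate PSF to each band."""
    bands = np.array_split(volume, len(psfs), axis=0)
    return np.concatenate(
        [wiener_deconvolve(band, psf, nsr) for band, psf in zip(bands, psfs)],
        axis=0,
    )

# Toy usage: two depth bands whose blur widens with distance from the transducer.
vol = np.random.default_rng(1).standard_normal((64, 32, 32))
psfs = [np.ones((3, 3, 3)) / 27.0, np.ones((5, 5, 5)) / 125.0]
print(depthwise_deconvolve(vol, psfs).shape)
```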