943 results for fixed point method
Abstract:
In a paper by Biro et al. [7], a novel twist on guarding in art galleries is introduced. A beacon is a fixed point with an attraction pull that can move points within the polygon. Points move greedily to monotonically decrease their Euclidean distance to the beacon, by moving straight towards the beacon or by sliding on the edges of the polygon. The beacon attracts a point if the point eventually reaches the beacon. Unlike most variations of the art gallery problem, beacon attraction has the intriguing property of being asymmetric, leading to separate definitions of the attraction region and the inverse attraction region. The attraction region of a beacon is the set of points that it attracts. For a given point in the polygon, the inverse attraction region is the set of beacon locations that can attract the point. We first study the characteristics of beacon attraction. We consider the quality of a "successful" beacon attraction and provide an upper bound of $\sqrt{2}$ on the ratio between the length of the beacon trajectory and the geodesic distance in a simple polygon. In addition, we provide an example of a polygon with holes in which this ratio is unbounded. Next, we consider the problem of computing the shortest beacon watchtower in a polygonal terrain and present an $O(n \log n)$ time algorithm to solve this problem. In doing this, we introduce $O(n \log n)$ time algorithms to compute the beacon kernel and the inverse beacon kernel in a monotone polygon. We also prove that $\Omega(n \log n)$ time is a lower bound for computing the beacon kernel of a monotone polygon. Finally, we study the inverse attraction region of a point in a simple polygon. We present algorithms to efficiently compute the inverse attraction region of a point for simple, monotone, and terrain polygons, with respective time complexities $O(n^2)$, $O(n \log n)$ and $O(n)$. We show that the inverse attraction region of a point in a simple polygon has linear complexity, and that the problem of computing the inverse attraction region has a lower bound of $\Omega(n \log n)$ in monotone polygons and consequently in simple polygons.
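As an illustration of the $\sqrt{2}$ bound, the following minimal Python sketch compares the length of a piecewise-linear beacon trajectory with the geodesic distance; the trajectory, geodesic path, and coordinates are hypothetical examples chosen for illustration, not instances from the paper.

import math

def polyline_length(points):
    # Sum of Euclidean distances between consecutive vertices.
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def attraction_ratio(trajectory, geodesic):
    # Ratio of beacon-trajectory length to geodesic length; in a simple polygon
    # the paper's bound says this should not exceed sqrt(2).
    return polyline_length(trajectory) / polyline_length(geodesic)

# Hypothetical example: the point first slides along an edge, then moves straight
# to the beacon at (2, 2); its distance to the beacon decreases monotonically.
trajectory = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
geodesic   = [(0.0, 0.0), (2.0, 2.0)]
print(attraction_ratio(trajectory, geodesic) <= math.sqrt(2) + 1e-12)   # True

In this toy configuration the ratio is exactly $\sqrt{2}$, matching the upper bound stated above.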
Abstract:
This paper presents an existence and localization result for unbounded solutions of a second-order differential equation on the half-line with functional boundary conditions. By applying unbounded upper and lower solutions, Green's functions, and the Schauder fixed point theorem, the existence of at least one solution is shown for the above problem. One example and one application to an Emden-Fowler equation are presented to illustrate our results.
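For orientation, the generic shape of such a fixed-point argument can be sketched in LaTeX as follows; the nonlinearity $f$, the Green's function $G$, and the boundary functionals are placeholders rather than the specific ones treated in the paper.

\[
u''(t) + f\bigl(t, u(t), u'(t)\bigr) = 0, \qquad t \in [0, +\infty),
\]
\[
(Tu)(t) := \int_{0}^{+\infty} G(t,s)\, f\bigl(s, u(s), u'(s)\bigr)\, \mathrm{d}s,
\]
so that solutions of the boundary value problem correspond to fixed points $u = Tu$; Schauder's theorem yields such a fixed point once $T$ is shown to map a closed, bounded, convex subset of a suitable weighted space continuously into a relatively compact part of itself, with the upper and lower solutions providing the localization. A classical Emden-Fowler equation, of the type the result is applied to, has the form $u''(t) + q(t)\,u(t)^{\gamma} = 0$.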
Abstract:
In this work the fundamental ideas needed to study properties of QFTs with the functional Renormalization Group are presented and illustrated with some examples. First, the Wetterich equation for the effective average action is derived, together with its flow in the local potential approximation (LPA) for a single scalar field. This case is used to illustrate some techniques for solving the RG fixed point equation and studying the properties of the critical theories in D dimensions: in particular, the shooting method for the ODE satisfied by the fixed point potential, as well as the approach based on a polynomial truncation with a finite number of couplings, which is convenient for studying the critical exponents. We then study novel cases related to multi-field scalar theories, deriving the flow equations in the LPA truncation, both without assuming any global symmetry and specialising to cases with a given symmetry, using truncations based on polynomials of the symmetry invariants. This is used to study possible non-perturbative solutions of critical theories which extend known perturbative results obtained in the epsilon expansion below the upper critical dimension.
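A minimal Python sketch of the shooting idea for an LPA-type fixed-point equation is given below; the constant c_D, the regulator normalization, the chosen dimension, and the blow-up criterion are assumptions made for illustration and differ between conventions in the literature.

import numpy as np
from scipy.integrate import solve_ivp

D = 3.0
c_D = 1.0   # regulator- and normalization-dependent constant (assumed value)

def rhs(phi, y):
    # y = (u, u').  An LPA fixed-point condition of the schematic form
    #   0 = -D*u + (D-2)/2 * phi * u' + c_D / (1 + u'')
    # fixes u'' algebraically in terms of u and u'.
    u, up = y
    denom = D * u - 0.5 * (D - 2.0) * phi * up
    upp = c_D / denom - 1.0
    return [up, upp]

def survival_range(u0, phi_max=20.0):
    # Shoot from phi = 0 with u'(0) = 0 (Z2-symmetric potential) and return the
    # value of phi at which the numerical solution breaks down; a genuine
    # fixed-point solution survives to arbitrarily large phi.
    blow_up = lambda phi, y: abs(y[1]) - 1e6
    blow_up.terminal = True
    sol = solve_ivp(rhs, (0.0, phi_max), [u0, 0.0], events=blow_up,
                    max_step=1e-2, rtol=1e-9, atol=1e-11)
    return sol.t[-1]

# Coarse scan over the shooting parameter u(0); bisection on the longest-surviving
# interval would then refine the fixed-point value.
for u0 in np.linspace(0.02, 1.0, 10):
    print(f"u(0) = {u0:.3f}  ->  solution survives to phi = {survival_range(u0):.3f}")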
Abstract:
This study investigated the benthic assemblages of coralligenous reefs at 6 sites off Chioggia, in the northern Adriatic Sea, comparing 2 different methods of analysis of photographic samples: the grid method (overlapping a grid of 400 cells) and the random point method (random distribution of 100 points on the photo). For the first method, taxonomic recognition and percentage coverage estimations were performed manually using the photoQuad software. For the second, the CoralNet semi-automated web-based annotation system was applied. This allows for assisted and supervised identification, the success rate of which gradually improves after initial software training. The results obtained with the two methods of analysing photographic samples are slightly different. The random point method gives lower species richness values and some differences in coverage estimations; all of this is reflected in the calculation of the biotic index. NAMBER values are significantly lower with the random point method and provide locally different classifications (3 out of 6 sites). However, the results obtained with the two methods are closely related to each other and depict a similar spatial trend. These results raise caution in applying different, albeit similar, methods in the analysis of benthic assemblages aimed at environmental quality assessment.
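The random point estimator can be mimicked in a few lines of Python/NumPy; the label map below is synthetic, standing in for an annotated photograph, and the taxon codes and image size are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic label map standing in for one annotated photo: each pixel carries a taxon code.
labels = rng.integers(0, 5, size=(1200, 1600))

def random_point_cover(labels, n_points=100):
    # Percent cover estimated from n_points random points, as in the random point method.
    rows = rng.integers(0, labels.shape[0], n_points)
    cols = rng.integers(0, labels.shape[1], n_points)
    hits = labels[rows, cols]
    taxa, counts = np.unique(hits, return_counts=True)
    return dict(zip(taxa.tolist(), (100.0 * counts / n_points).tolist()))

def full_cover(labels):
    # Reference percent cover computed from every pixel (the limit a dense grid approaches).
    taxa, counts = np.unique(labels, return_counts=True)
    return dict(zip(taxa.tolist(), (100.0 * counts / labels.size).tolist()))

print(random_point_cover(labels))
print(full_cover(labels))

With only 100 points, rare taxa are easily missed altogether, which is consistent with the lower species richness values reported above for the random point method.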
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology, to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980, using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
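The core of the level-set idea can be sketched with a first-order upwind advection of the level-set function on a fixed grid (Python/NumPy); the grid size, velocity field, time step, and periodic boundaries below are illustrative assumptions, not the FEM discretization used in the paper.

import numpy as np

# The interface is the zero contour of phi, transported by a velocity field (u, v).
n, dx, dt = 128, 1.0 / 128, 2e-3
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.3) ** 2) - 0.15   # signed distance to a circle
u = np.zeros_like(phi)
v = np.full_like(phi, 0.2)                               # uniform upward motion (assumed)

def upwind_step(phi, u, v, dx, dt):
    # One explicit upwind update of phi_t + u*phi_x + v*phi_y = 0
    # (np.roll makes the boundaries periodic, which is fine for this toy case).
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward differences in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward differences in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    phi_x = np.where(u > 0, dmx, dpx)
    phi_y = np.where(v > 0, dmy, dpy)
    return phi - dt * (u * phi_x + v * phi_y)

for _ in range(200):
    phi = upwind_step(phi, u, v, dx, dt)
# The zero level set of phi now marks the advected free surface.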
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
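The reported reproducibility metric is a simple percent difference between repeat total brain volumes; a minimal Python sketch with hypothetical volumes is shown below, assuming the common convention of dividing by the mean of the two measurements (the exact denominator used in the paper is not specified here).

def percent_difference(v1, v2):
    # Percent difference between two repeat total brain volume (TBV) measurements,
    # taken relative to the mean of the pair.
    return 100.0 * abs(v1 - v2) / ((v1 + v2) / 2.0)

# Hypothetical repeat scans of one subject, volumes in cm^3.
print(f"{percent_difference(1182.0, 1196.5):.2f}%")   # ~1.22%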
Abstract:
The Golgi method has been used for over a century to describe the general morphology of neurons in the nervous system of different species. The "single-section" Golgi method of Gabbott and Somogyi (1984) and the modifications made by Izzo et al. (1987) are able to produce consistent results. Here, we describe procedures to show cortical and subcortical neurons of human brains immersed in formalin for months or even years. The tissue was sliced with a vibratome, post-fixed in a combination of paraformaldehyde and picric acid in phosphate buffer, followed by osmium tetroxide and potassium dichromate, "sandwiched" between cover slips, and immersed in silver nitrate. The whole procedure takes between 5 and 11 days to achieve good results. The Golgi method has its characteristic pitfalls but, with this procedure, neurons and glia appear well impregnated, allowing qualitative and quantitative studies under light microscopy. This contribution adds to the basic techniques for the study of human nervous tissue, with the same advantages described for the "single-section" Golgi method in other species: it is easy and fast, requires minimal equipment, and provides consistent results.
Abstract:
Hereditary nonpolyposis colorectal cancer syndrome (HNPCC) is an autosomal dominant condition accounting for 2–5% of all colorectal carcinomas as well as a small subset of endometrial, upper urinary tract and other gastrointestinal cancers. An assay to detect the underlying defect in HNPCC, inactivation of a DNA mismatch repair enzyme, would be useful in identifying HNPCC probands. Monoclonal antibodies against hMLH1 and hMSH2, two DNA mismatch repair proteins which account for most HNPCC cancers, are commercially available. This study sought to investigate the potential utility of these antibodies in determining the expression status of these proteins in paraffin-embedded formalin-fixed tissue and to identify key technical protocol components associated with successful staining. A set of 20 colorectal carcinoma cases of known hMLH1 and hMSH2 mutation and expression status underwent immunoperoxidase staining at multiple institutions, each of which used their own technical protocol. Staining for hMSH2 was successful in most laboratories while staining for hMLH1 proved problematic in multiple labs. However, a significant minority of laboratories demonstrated excellent results including high discriminatory power with both monoclonal antibodies. These laboratories appropriately identified hMLH1 or hMSH2 inactivation with high sensitivity and specificity. The key protocol point associated with successful staining was an antigen retrieval step involving heat treatment and either EDTA or citrate buffer. This study demonstrates the potential utility of immunohistochemistry in detecting HNPCC probands and identifies key technical components for successful staining.
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem was formulated as a large-scale mixed-integer linear problem, suitable for being solved by a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings, which are achieved with lower amounts of capacitive compensation. The proposed method has also been applied to the compensation of an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22,298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
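A toy version of the selection step can be written as a mixed-integer linear program with the PuLP library (Python); the candidate buses, per-kvar saving coefficients, bank costs, annualization factor, and limits below are invented placeholders, and the paper's full model additionally handles switched operation and the linearized network equations.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

nodes = ["n1", "n2", "n3"]                   # candidate buses (hypothetical)
ratings = [150, 300, 450]                    # available capacitor sizes in kvar (assumed)
savings = {"n1": 2.1, "n2": 3.4, "n3": 1.2}  # $/kvar-year loss-saving coefficient from a linearized loss model (assumed)
cost = {150: 700, 300: 1100, 450: 1500}      # $ per installed bank (assumed)

prob = LpProblem("capacitor_placement", LpMaximize)
x = {(n, r): LpVariable(f"x_{n}_{r}", cat=LpBinary) for n in nodes for r in ratings}

# Objective: net savings = loss-reduction savings minus annualized capacitor cost (0.2 is an assumed annualization factor).
prob += lpSum(x[n, r] * (savings[n] * r - 0.2 * cost[r]) for n in nodes for r in ratings)

# At most one bank per candidate node, and a global limit on installed kvar.
for n in nodes:
    prob += lpSum(x[n, r] for r in ratings) <= 1
prob += lpSum(x[n, r] * r for n in nodes for r in ratings) <= 600

prob.solve()
chosen = [(n, r) for (n, r), var in x.items() if var.value() == 1]
print(chosen)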
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
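The probabilistic strategy can be illustrated with a deliberately simplified, single-locus Bayesian update in Python; the allele frequencies, flat prior, and no-dropout/no-drop-in assumptions are illustrative simplifications of the qualitative model discussed above.

import numpy as np

rng = np.random.default_rng(1)
freqs = np.array([0.40, 0.30, 0.20, 0.10])     # hypothetical allele frequencies at one locus
observed_distinct = 4                          # distinct alleles observed in the mixture at this locus
prior = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}   # flat prior over the number of contributors N

def likelihood(n_contrib, n_sims=20000):
    # Estimate P(this many distinct alleles | N) by simulating 2N allele draws:
    # a simplified qualitative model with unrelated contributors, no dropout,
    # no drop-in, and peak information ignored.
    draws = rng.choice(len(freqs), size=(n_sims, 2 * n_contrib), p=freqs)
    n_distinct = np.array([len(set(row)) for row in draws])
    return float(np.mean(n_distinct == observed_distinct))

posterior = {n: prior[n] * likelihood(n) for n in prior}
norm = sum(posterior.values())
posterior = {n: p / norm for n, p in posterior.items()}
print(posterior)   # probability distribution over N given the observed alleles

A decision on N taken from this posterior can then be compared with the deterministic minimum-N policy using a scoring rule on the divergence from the true value, which is the comparison carried out in the paper.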
Abstract:
Background: Variable definitions of outcome (Constant score, Simple Shoulder Test [SST]) have been used to assess outcome after shoulder treatment, although none has been accepted as the universal standard. Physicians lack an objective method to reliably assess the activity of their patients in dynamic conditions. Our purpose was to clinically validate the shoulder kinematic scores given by a portable movement analysis device, using the activities of daily living described in the SST as a reference. The secondary objective was to determine whether this device could be used to document the effectiveness of shoulder treatments (for glenohumeral osteoarthritis and rotator cuff disease) and detect early failures. Methods: A clinical trial including 34 patients and a control group of 31 subjects over an observation period of 1 year was set up. Evaluations were made at baseline and 3, 6, and 12 months after surgery by 2 independent observers. Miniature sensors (3-dimensional gyroscopes and accelerometers) allowed kinematic scores to be computed. They were compared with the regular outcome scores: SST; Disabilities of the Arm, Shoulder and Hand; American Shoulder and Elbow Surgeons; and Constant. Results: Good to excellent correlations (0.61-0.80) were found between kinematic and clinical scores. Significant differences were found at each follow-up in comparison with the baseline status for all the kinematic scores (P < .015). The kinematic scores were able to point out abnormal patient outcomes at the first postoperative follow-up. Conclusion: Kinematic scores add information to the regular outcome tools. They offer an effective way to measure the functional performance of patients with shoulder pathology and have the potential to detect early treatment failures. Level of evidence: Level II, Development of Diagnostic Criteria, Diagnostic Study.
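The reported agreement between kinematic and clinical scores is a correlation analysis; a minimal Python sketch with hypothetical paired scores (the values below are invented for illustration) is:

from scipy.stats import pearsonr

# Hypothetical paired scores for a handful of patients (illustrative values only).
kinematic_scores = [42.0, 55.5, 61.0, 48.2, 70.3, 66.1, 52.8]
sst_scores       = [5, 7, 9, 6, 11, 10, 7]

r, p = pearsonr(kinematic_scores, sst_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")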
Abstract:
A new method is used to estimate the volumes of sediments of glacial valleys. This method is based on the concept of the sloping local base level and requires only a digital terrain model and the limits of the alluvial valleys as input data. The bedrock surface of the glacial valley is estimated by a progressive excavation of the digital elevation model (DEM) of the filled valley area. This is performed using an iterative routine that replaces the altitude of a point of the DEM by the mean value of its neighbors minus a fixed value. The result is a curved surface, quadratic in 2D. The bedrock surface of the Rhone Valley in Switzerland was estimated with this method using the free Shuttle Radar Topography Mission (SRTM) digital terrain model (~92 m resolution). The results obtained are in good agreement with previous estimations based on seismic profiles and gravimetric modeling, with the exception of some particular locations. The results from the present method and those from the seismic interpretation are slightly different from the results of the gravimetric data. This discrepancy may result from the presence of large buried landslides in the bottom of the Rhone Valley.
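The iterative excavation routine described above can be sketched in a few lines of Python/NumPy; the toy DEM, valley mask, offset, and iteration count are illustrative assumptions, and real use would operate on an SRTM grid clipped to the mapped valley limits.

import numpy as np

def estimate_bedrock(dem, valley_mask, offset=1.0, n_iter=500):
    # Iteratively "excavate" the DEM inside the valley: each masked cell is replaced
    # by the mean of its four neighbours minus a fixed offset, never rising above the
    # original surface; cells outside the mask stay pinned to the DEM.
    # (np.roll wraps at the array edges, which is acceptable for this toy case.)
    z = dem.astype(float).copy()
    for _ in range(n_iter):
        neigh = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                        np.roll(z, 1, 1) + np.roll(z, -1, 1))
        update = np.minimum(neigh - offset, dem)   # excavate, but stay below the surface
        z = np.where(valley_mask, update, dem)
    return z

# Hypothetical toy example: a flat 100 m plateau with a valley strip down the middle.
dem = np.full((50, 50), 100.0)
mask = np.zeros_like(dem, dtype=bool)
mask[:, 20:30] = True
bedrock = estimate_bedrock(dem, mask, offset=0.5)
fill_depth_sum = float(np.sum((dem - bedrock)[mask]))   # multiply by cell area (~92 m x 92 m for SRTM) to get a volume

Inside the masked strip the relaxation converges to a curved, roughly parabolic cross-section, consistent with the quadratic surface described above, and the sediment volume is the difference between the DEM and the estimated bedrock summed over the valley cells.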