12 results for application technique
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding if the pebbling number is at most k is Π₂P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
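The pebbling move defined above can be sketched in a few lines; this is a minimal illustration of the move itself (the function and graph representation are assumptions for illustration, not the paper's algorithm):

```python
# Minimal sketch of a pebbling move: two pebbles leave vertex u,
# one arrives at an adjacent vertex v, and one is lost as a toll.
def pebbling_move(pebbles, u, v, edges):
    """Apply a pebbling move from u to v in place; pebbles maps vertex -> count."""
    if (u, v) not in edges and (v, u) not in edges:
        raise ValueError("u and v must be adjacent")
    if pebbles.get(u, 0) < 2:
        raise ValueError("a move needs at least two pebbles at u")
    pebbles[u] -= 2                        # two pebbles taken from u
    pebbles[v] = pebbles.get(v, 0) + 1     # one pebble placed at v
    return pebbles

# Example on a path a-b-c: a supply of 4 pebbles at a can deliver
# one pebble to c (consistent with the pebbling number 4 of this path).
edges = {("a", "b"), ("b", "c")}
p = {"a": 4}
pebbling_move(p, "a", "b", edges)
pebbling_move(p, "a", "b", edges)
pebbling_move(p, "b", "c", edges)
print(p)  # {'a': 0, 'b': 0, 'c': 1}
```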
Abstract:
The biplot has proved to be a powerful descriptive and analytical tool in many areas of application of statistics. For compositional data the necessary theoretical adaptation has been provided, with illustrative applications, by Aitchison (1990) and Aitchison and Greenacre (2002). These papers were restricted to the interpretation of simple compositional data sets. In many situations the problem has to be described in some form of conditional modelling. For example, in a clinical trial where interest is in how patients' steroid metabolite compositions may change as a result of different treatment regimes, interest is in relating the compositions after treatment to the compositions before treatment and the nature of the treatments applied. To study this through a biplot technique requires the development of some form of conditional compositional biplot. This is the purpose of this paper. We choose as a motivating application an analysis of the 1992 US Presidential Election, where interest may be in how the three-part composition, the percentage division among the three candidates - Bush, Clinton and Perot - of the presidential vote in each state, depends on the ethnic composition and on the urban-rural composition of the state. The methodology of conditional compositional biplots is first developed and a detailed interpretation of the 1992 US Presidential Election provided. We use a second application involving the conditional variability of tektite mineral compositions with respect to major oxide compositions to demonstrate some hazards of simplistic interpretation of biplots. Finally we conjecture on further possible applications of conditional compositional biplots.
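The standard (unconditional) compositional biplot underlying this work can be sketched as follows; this is the generic clr-transform-plus-SVD recipe, not the paper's conditional construction, and the toy data are invented for illustration:

```python
import numpy as np

# Generic compositional-biplot preprocessing: centred log-ratio (clr)
# transform, column centring, then the SVD that yields biplot markers.
def clr(x):
    """Centred log-ratio transform; each row of x is a composition."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

comps = np.array([[0.5, 0.3, 0.2],      # toy three-part compositions
                  [0.4, 0.4, 0.2],
                  [0.6, 0.1, 0.3]])
Z = clr(comps)
Z = Z - Z.mean(axis=0)                  # column-centre before the SVD
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
row_markers = U[:, :2] * s[:2]          # observation (form biplot) markers
col_markers = Vt[:2].T                  # markers for the compositional parts
```

Plotting `row_markers` and `col_markers` on the same axes gives the two-dimensional biplot approximation of the clr-transformed data.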
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into the multispectral bands, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
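The detail-injection idea behind wavelet-based fusion can be illustrated with a toy one-dimensional Haar transform; this is purely illustrative (the paper works with 2-D imagery and models sensor spectral responses, which this sketch omits):

```python
import numpy as np

# Toy 1-D sketch of wavelet detail injection: keep the multispectral
# band's low-resolution approximation and inject the panchromatic
# image's detail coefficients before inverting the transform.
def haar(x):
    """One level of the 1-D Haar transform: (averages, details)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2

def ihaar(avg, det):
    """Invert one level of the 1-D Haar transform."""
    out = np.empty(2 * len(avg))
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

def fuse(ms, pan):
    ms_avg, _ = haar(ms)       # spectral content from the MS band
    _, pan_det = haar(pan)     # spatial detail from the panchromatic band
    return ihaar(ms_avg, pan_det)
```

The fused signal keeps the coarse values of `ms` but acquires the local variation of `pan`, which is the essence of the injection schemes the abstract refers to.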
Abstract:
The index of maximum and minimum level is a very useful technique, especially for decision making, that uses the Hamming distance and the adequacy coefficient in the same problem. In this work, a generalization is proposed through the use of generalized and quasi-arithmetic means. These aggregation operators are called the generalized ordered weighted averaging index of maximum and minimum level (GOWAIMAM) and the quasi-arithmetic index of maximum and minimum level (Quasi-OWAIMAM). These new operators generalize a wide range of particular cases, such as the generalized index of maximum and minimum level (GIMAM), the OWAIMAM, and others. An application to decision making in product selection is also developed.
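A hedged sketch of the kind of aggregation the abstract describes; the function name, interface and the power-mean parameterization are illustrative assumptions, not taken from the paper:

```python
# Sketch of an OWAIMAM-style operator: each component contributes either
# the Hamming distance |x - y| or the adequacy coefficient min(1, 1 - x + y);
# the values are then reordered (the OWA step) and combined with a
# generalized (power) mean of exponent lam.
def gowaimam(xs, ys, weights, use_distance, lam=2.0):
    args = [abs(x - y) if d else min(1.0, 1.0 - x + y)
            for x, y, d in zip(xs, ys, use_distance)]
    b = sorted(args, reverse=True)          # OWA reordering step
    return sum(w * v ** lam for w, v in zip(weights, b)) ** (1.0 / lam)

# With lam = 1 this reduces to an ordinary weighted average of the
# reordered values (an OWAIMAM-like particular case).
score = gowaimam([0.8, 0.6], [0.7, 0.9], [0.5, 0.5],
                 [True, False], lam=1.0)
```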
Abstract:
Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize the caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations. Method We present CaudateCut: a new fully automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new data and boundary potentials for the energy function. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as to a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
Abstract:
A novel and simple procedure for concentrating adenoviruses from seawater samples is described. The technique entails the adsorption of viruses to pre-flocculated skimmed milk proteins, allowing the flocs to sediment by gravity, and dissolving the separated sediment in phosphate buffer. The concentrated virus may then be detected by PCR techniques following nucleic acid extraction. The method requires no specialized equipment other than that usually available in routine public health laboratories, and its straightforwardness allows a large number of water samples to be processed simultaneously. The usefulness of the method was demonstrated by concentrating virus from multiple seawater samples during a survey of adenoviruses in coastal waters.
Abstract:
Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock-slope characterization and monitoring. Landslide and rockfall movements can be detected by comparing sequential scans. One of the most pressing challenges in natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. The experiment consisted of known displacements of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
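Nearest Neighbour averaging as described above can be sketched in a few lines; this is an assumed, simplified implementation (not the authors' code), where each point's measured displacement is replaced by the mean over its k nearest neighbours, damping independent instrumental noise by roughly a factor of sqrt(k):

```python
import numpy as np

# Sketch of NN averaging over a point cloud: smooth per-point measured
# displacements by averaging each point with its k nearest neighbours.
def nn_average(points, values, k=25):
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        idx = np.argsort(dist)[:k]      # k nearest points (incl. the point itself)
        out[i] = values[idx].mean()
    return out
```

A k-d tree would replace the brute-force distance scan for real scans with millions of points; the averaging step itself is unchanged.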
Abstract:
Logistic regression is among the analysis techniques that are valid for observational methodology. However, its presence at the heart of this methodology, and more specifically in physical activity and sports studies, is scarce. With a view to highlighting the possibilities this technique offers within the scope of observational methodology applied to physical activity and sports, an application of the logistic regression model is presented. The model is applied in the context of an observational design which aims to determine, from the analysis of use of the playing area, which football discipline (7-a-side, 9-a-side or 11-a-side football) is best adapted to the child's possibilities. A multiple logistic regression model can provide an effective prognosis of the probability of a move being successful (reaching the opposing goal area) depending on the sector in which the move commenced and the football discipline being played.
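The prognosis step can be sketched as follows; the coefficients and the coding of sector and discipline below are invented for illustration, not the study's fitted estimates:

```python
import math

# Multiple logistic regression prediction: the probability of a
# successful move is P = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
def success_probability(intercept, coefs, features):
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coding: starting sector (1-4) plus dummy variables for
# 9-a-side and 11-a-side football (7-a-side as the reference category).
coefs = [0.6, -0.4, -0.9]
p7 = success_probability(-1.5, coefs, [3, 0, 0])   # sector 3, 7-a-side
p11 = success_probability(-1.5, coefs, [3, 0, 1])  # sector 3, 11-a-side
```

Comparing such predicted probabilities across sectors and disciplines is what lets the design rank the disciplines against the child's possibilities.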
Abstract:
In this article, the objective is to demonstrate the effects of different decision styles on strategic decisions and, by extension, on an organization. The technique presented in the study is based on the transformation of linguistic variables into numerical value intervals. In this model, the study benefits from fuzzy logic methodology and fuzzy numbers. This fuzzy methodology approach allows us to examine the relations between decision-making styles and strategic management processes under uncertainty. The purpose is to provide results that may help companies exercise the most appropriate decision-making style for their different strategic management processes. The study leaves open research topics for further studies that may be applied to other decision-making areas within the strategic management process.
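The transformation of linguistic variables into numerical value intervals can be sketched minimally; the term set and interval bounds below are assumptions for illustration, not the scale used in the study:

```python
# Sketch: map linguistic ratings to numeric value intervals, then
# aggregate a set of ratings by averaging the interval endpoints.
SCALE = {
    "very low":  (0.0, 0.2),
    "low":       (0.2, 0.4),
    "medium":    (0.4, 0.6),
    "high":      (0.6, 0.8),
    "very high": (0.8, 1.0),
}

def aggregate(terms):
    """Return the interval (mean of lower bounds, mean of upper bounds)."""
    lows, highs = zip(*(SCALE[t] for t in terms))
    return sum(lows) / len(lows), sum(highs) / len(highs)

interval = aggregate(["high", "medium", "very high"])  # approximately (0.6, 0.8)
```

Keeping the result as an interval rather than a single number is what preserves the uncertainty of the original linguistic judgments through the analysis.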
Abstract:
We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power, in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is in contrast rather conservative, by construction. Furthermore, simulations show that correcting for lower order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
Abstract:
The present study describes some of the applications of ultrasound in bone surgery, based on the presentation of two clinical cases. The Piezosurgery® ultrasound device was used (Mectron Medical Technology, Carasco, Italy). In one case the instrument was used to harvest a chin bone graft for placement in a bone defect at level 1.2, while in the other a bony window osteotomy was made in the external wall of the maxillary sinus, in the context of a sinus membrane lift procedure. The Piezosurgery® device produces specific ultrasound frequency modulation (25-29 kHz), and has been designed to secure increased precision in bone surgery. The instrument produces selective sectioning of the mineralized bone structures and causes less intra- and postoperative bleeding. One of the advantages of the Piezosurgery® device is that it can be used for maxillary sinus lift procedures in dental implant placement. In this context it considerably lessens the risk of sinus mucosa laceration when preparing the bony window in the external wall of the upper maxilla, and can be used to complete the lifting maneuver. The use of ultrasound on hard tissues can be regarded as a slow technique compared with conventional rotary instruments, since it requires special surgical skill and involves a certain learning curve.