935 results for semiconvex extensions
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data, and the identification of quantitative trait loci (QTL) from whole genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. We have developed for this purpose a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
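As a rough illustration of the task MADAP addresses, the following is a minimal weighted 1D clustering sketch (plain weighted k-means over coordinate/count pairs; this is illustrative only and is not MADAP's actual EM-style procedure, whose parameters and behavior are described in the paper):

```python
import numpy as np

def cluster_1d(positions, counts, k, iters=50):
    """Toy weighted 1D k-means: groups genome coordinates with counts
    into k peaks and reports center, volume, and extension of each.
    (Simplified illustration; MADAP's own procedure differs.)"""
    centers = np.quantile(positions, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # assign each coordinate to the nearest current center
        labels = np.argmin(np.abs(positions[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            m = labels == j
            if m.any():
                # count-weighted mean position becomes the new center
                centers[j] = np.average(positions[m], weights=counts[m])
    peaks = []
    for j in range(k):
        m = labels == j
        peaks.append({"center": centers[j],
                      "volume": counts[m].sum(),                    # total tag count
                      "extension": positions[m].ptp() if m.any() else 0})
    return peaks

# Example: two noisy transcription start site clusters
pos = np.array([100, 102, 105, 500, 503, 504, 510])
cnt = np.array([5, 3, 1, 10, 8, 2, 1])
print(cluster_1d(pos, cnt, k=2))
```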
Abstract:
We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.
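For orientation, the standard definition of majorization underlying such arguments (a textbook definition, not notation specific to this paper): a probability vector $x$ is majorized by $y$, written $x \prec y$, when the decreasingly sorted partial sums satisfy

```latex
\sum_{i=1}^{k} x_i^{\downarrow} \le \sum_{i=1}^{k} y_i^{\downarrow}
\quad (k = 1, \dots, n-1),
\qquad
\sum_{i=1}^{n} x_i^{\downarrow} = \sum_{i=1}^{n} y_i^{\downarrow}.
```

In the quantum setting the $x_i$ are the probabilities of the computational-basis outcomes, and the claim is that these distributions become progressively more ordered in the majorization sense as the algorithm runs.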
Abstract:
Among the soils of Mato Grosso do Sul, the Spodosols stand out in the Pantanal biome. Despite covering considerable extensions, few studies have characterized and classified these soils. The purpose of this study was to characterize and classify soils in three areas of two physiographic types in the Taquari river basin: bay and flooded fields. Two trenches were opened in the bay area (P1 and P2) and two in the flooded field (P3 and P4). The third area (saline), with high sodium levels, was sampled for further studies. In the soils of both areas the sand fraction was predominant, and the texture ranged from sand to sandy loam, with quartz as the main constituent. In the bay area, the soil organic carbon (OC) content in the surface layer (P1) was > 80 g kg-1, which was diagnosed as a Histic epipedon. In the other profiles the surface horizons had low OC levels which, associated with other properties, classified them as Ochric epipedons. In the soils of the bay area (P1 and P2), the pH ranged from 5.0 to 7.5, associated with dominance of Ca2+ and Mg2+, with base saturation above 50 % in some horizons. In the flooded fields (P3 and P4) the soil pH ranged from 4.9 to 5.9, H+ contents were high in the surface horizons (0.8-10.5 cmolc kg-1), Ca2+ and Mg2+ contents ranged from 0.4 to 0.8 cmolc kg-1, and base saturation was < 50 %. In the soils of the bay area (P1 and P2), iron (extracted by dithionite, Fed) and OC accumulated in the spodic horizon; in the P3 and P4 soils only Fed accumulated (in the subsurface layers). According to the criteria adopted by the Brazilian System of Soil Classification (SiBCS) at the subgroup level, the soils were classified as: P1: Organic Hydromorphic Ferrohumiluvic Spodosol. P2: Typical Orthic Ferrohumiluvic Spodosol. P3: Typical Hydromorphic Ferroluvic Spodosol. P4: Arenic Orthic Ferroluvic Spodosol.
Abstract:
Neuronal oscillations are an important aspect of EEG recordings. These oscillations are thought to be involved in several cognitive mechanisms. For instance, oscillatory activity is considered a key component of the top-down control of perception. However, measuring this activity and its influence requires precise extraction of the frequency components. This processing is not straightforward. In particular, difficulties in extracting oscillations arise from their time-varying characteristics. Moreover, when phase information is needed, it is of the utmost importance to extract narrow-band signals. This paper presents a novel method using adaptive filters for tracking and extracting these time-varying oscillations. The scheme is designed to maximize the oscillatory behavior at the output of the adaptive filter. It is thus capable of tracking an oscillation and describing its temporal evolution even during low-amplitude time segments. Moreover, the method can be extended to track several oscillations simultaneously and to use multiple signals. These two extensions are particularly relevant in the framework of EEG data processing, where oscillations in different frequency bands are active at the same time and signals are recorded with multiple sensors. The presented tracking scheme is first tested with synthetic signals in order to highlight its capabilities. It is then applied to data recorded during a visual shape discrimination experiment to assess its usefulness during EEG processing and in detecting functionally relevant changes. This method is a useful additional processing step, providing information complementary to classical time-frequency analyses and improving the detection and analysis of cross-frequency couplings.
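The authors' tracking scheme is their own; as a rough illustration of extracting a time-varying narrow-band oscillation with an adaptive filter, here is a minimal adaptive line enhancer (a classic normalized-LMS structure, not the method of the paper; the signal parameters are invented for the demo):

```python
import numpy as np

def adaptive_line_enhancer(x, order=32, delay=8, mu=0.01):
    """Classic LMS adaptive line enhancer: predicts x[n] from a delayed
    window of past samples. The prediction retains narrow-band
    (oscillatory) components, since broadband noise is unpredictable
    across the delay."""
    w = np.zeros(order)
    y = np.zeros_like(x)
    for n in range(order + delay, len(x)):
        u = x[n - delay - order:n - delay][::-1]   # delayed regressor
        y[n] = w @ u                               # narrow-band estimate
        e = x[n] - y[n]
        w += mu * e * u / (u @ u + 1e-12)          # normalized LMS update
    return y

# 10 Hz oscillation with slowly drifting amplitude, buried in noise
fs = 256
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t))
x += 0.8 * np.random.randn(t.size)
y = adaptive_line_enhancer(x)   # enhanced oscillatory component
```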
Abstract:
Synchronization phenomena in large populations of interacting elements are the subject of intense research efforts in physical, biological, chemical, and social systems. A successful approach to the problem of synchronization consists of modeling each member of the population as a phase oscillator. In this review, synchronization is analyzed in one of the most representative models of coupled phase oscillators, the Kuramoto model. A rigorous mathematical treatment, specific numerical methods, and many variations and extensions of the original model that have appeared in the last few years are presented. Relevant applications of the model in different contexts are also included.
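The model itself is compact: each oscillator $i$ has phase $\theta_i$ and natural frequency $\omega_i$, coupled through $\dot{\theta}_i = \omega_i + (K/N)\sum_j \sin(\theta_j - \theta_i)$. A minimal simulation of the mean-field form, measuring the order parameter $r$ (a standard textbook exercise, not code from the review):

```python
import numpy as np

def kuramoto(N=500, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler integration of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(N)          # Lorentzian frequency distribution
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()       # complex order parameter r * e^{i psi}
        r, psi = np.abs(z), np.angle(z)
        # mean-field identity: (1/N) sum_j sin(theta_j - theta_i) = r sin(psi - theta_i)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

print(kuramoto(K=0.5))  # weak coupling: r stays near 0 (incoherence)
print(kuramoto(K=4.0))  # strong coupling: r approaches 1 (synchronization)
```

With Lorentzian-distributed frequencies of unit width, the classical critical coupling is $K_c = 2$, so the two calls above sit on opposite sides of the synchronization transition.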
Abstract:
SUMMARY: A top scoring pair (TSP) classifier consists of a pair of variables whose relative ordering can be used for accurately predicting the class label of a sample. This classification rule has the advantage of being easily interpretable and more robust against technical variations in data, such as those due to different microarray platforms. Here we describe a parallel implementation of this classifier which significantly reduces the training time, and a number of extensions, including a multi-class approach, which has the potential to improve the classification performance. AVAILABILITY AND IMPLEMENTATION: Full C++ source code and the R package Rgtsp are freely available from http://lausanne.isb-sib.ch/~vpopovic/research/. The implementation relies on existing OpenMP libraries.
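The TSP rule itself is simple enough to sketch directly; below is a minimal NumPy illustration of the idea (not the parallel C++/Rgtsp implementation described above):

```python
import numpy as np

def train_tsp(X, y):
    """Top scoring pair: find the feature pair (i, j) maximizing
    |P(X_i < X_j | y=0) - P(X_i < X_j | y=1)|. X is samples x features,
    y is a 0/1 label vector."""
    X0, X1 = X[y == 0], X[y == 1]
    # frequency of the ordering X_a < X_b within each class
    p0 = (X0[:, :, None] < X0[:, None, :]).mean(axis=0)
    p1 = (X1[:, :, None] < X1[:, None, :]).mean(axis=0)
    score = np.abs(p0 - p1)
    i, j = np.unravel_index(score.argmax(), score.shape)
    # remember which ordering is more frequent in class 0
    return i, j, p0[i, j] > p1[i, j]

def predict_tsp(X, i, j, class0_if_less):
    """Classify by comparing just the two selected measurements."""
    return np.where((X[:, i] < X[:, j]) == class0_if_less, 0, 1)
```

Because prediction only compares two measurements within the same sample, the rule is invariant to any monotone, platform-dependent transformation of the data, which is the source of its robustness.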
Abstract:
Background/Purpose: Gouty arthritis (GA) is a chronic inflammatory disease. Targeting the inflammatory pathway through IL-1β inhibition with canakinumab (CAN) may provide significant long-term benefits. CAN safety versus triamcinolone acetonide (TA) over the initial 24 weeks (blinded study) for patients (pts) with a history of frequent attacks (≥3 in the year before baseline) was reported earlier from the core (β-RELIEVED [β-REL] and β-REL-II) and first extension (E1) studies1. Herein we present full 18-month long-term CAN safety data, including the open-label second extension (E2) studies. Methods: GA pts completing the β-REL E1 and β-REL-II E1 studies1 were enrolled in these 1-year, open-label, E2 studies. All pts entering E2, whether randomized to CAN or TA, received CAN 150 mg sc on demand upon a new attack. Data are presented only for pts randomized to CAN, and are reported cumulatively, i.e. including corresponding data from the previously reported core and E1 studies. Long-term safety outcomes and safety upon re-treatment are presented as incidence rates per 100 patient-years (pyr) of study participation for AEs and SAEs. Deaths are reported for all pts (randomized to CAN or TA). Selected predefined notable laboratory abnormalities are shown (neutrophils, platelets, liver and renal function tests). The long-term attack rate per year is also provided. Results: In total, 69/115 (60%) and 72/112 (64.3%) of the pts randomized to CAN in the two core studies entered the two E2 studies, of which 68 and 64 pts, respectively, completed the E2 studies. The two study populations differed in baseline comorbidity and geographic origin. Lab data (not time adjusted) for neutropenia appeared worse after re-treatment in β-REL E2, and deterioration of creatinine clearance appeared worse after re-treatment (Table 1). The time-adjusted incidence rates for AEs were 302.4/100 pyr and 360/100 pyr, and for SAEs 27.9/100 pyr and 13.9/100 pyr, in β-REL E2 and β-REL-II E2, respectively (Table 1). The time-adjusted incidence rates of any AEs, infection AEs, any SAEs, and selected SAEs before and after re-treatment are presented in Table 1. Incidence rates for AEs and SAEs declined after re-treatment, with the exception of SAEs in β-REL-II E2, which increased from 2.9/100 pyr to 10.9/100 pyr (there were no infection SAEs after re-treatment in β-REL-II E2, and the other SAEs fit no special pattern). In the total safety population (N = 454, core and all extensions), there were 4 deaths: 2 in the core studies, reported previously1, and 2 during the β-REL E2 study (one patient in the CAN group died from pneumonia; one patient in the TA group who never received CAN died of pneumococcal sepsis). None of the deaths was suspected by the investigators to be study drug related. The mean rates of new attacks per year on CAN were 1.21 and 1.18 in β-REL E2 and β-REL-II E2, respectively. Conclusion: The clinical safety profile of CAN upon re-treatment was maintained long-term, with no new infection concerns.
Abstract:
In social insects, workers perform a multitude of tasks, such as foraging, nest construction, and brood rearing, without central control of how work is allocated among individuals. It has been suggested that workers choose a task by responding to stimuli gathered from the environment. Response-threshold models assume that individuals in a colony vary in the stimulus intensity (response threshold) at which they begin to perform the corresponding task. Here we highlight the limitations of these models with respect to colony performance in task allocation. First, we show with analysis and quantitative simulations that the deterministic response-threshold model constrains the workers' behavioral flexibility under some stimulus conditions. Next, we show that the probabilistic response-threshold model fails to explain precise colony responses to varying stimuli. Both of these limitations would be detrimental to colony performance when dynamic and precise task allocation is needed. To address these problems, we propose extensions of the response-threshold model that add variables weighing the stimuli. We test the extended response-threshold model in a foraging scenario and show in simulations that it results in efficient task allocation. Finally, we show that response-threshold models can be formulated as artificial neural networks, which consequently provide a comprehensive framework for modeling task allocation in social insects.
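The baseline rule these models share has a standard closed form: an individual with threshold $\theta$ engages in a task with stimulus intensity $s$ with probability $T_\theta(s) = s^n/(s^n + \theta^n)$, typically with $n = 2$. A minimal simulation of this classic probabilistic rule (the paper's weighted-stimulus extension is not reproduced here; all parameter values are invented for the demo):

```python
import numpy as np

def threshold_response(s, theta, n=2):
    """Classic response-threshold function: probability that an individual
    with threshold theta engages in a task with stimulus intensity s."""
    return s**n / (s**n + theta**n)

rng = np.random.default_rng(0)
thetas = rng.uniform(1, 10, size=100)    # inter-individual threshold variation
stimulus, demand, work_rate = 5.0, 1.0, 0.05

for t in range(200):
    # each worker independently decides whether to engage this step
    active = rng.random(100) < threshold_response(stimulus, thetas)
    # stimulus grows with demand and is reduced by the active workforce
    stimulus = max(0.0, stimulus + demand - work_rate * active.sum())

print(f"stimulus settled near {stimulus:.2f} with {active.sum()} workers active")
```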
Abstract:
With European Monetary Union (EMU), there was an increase in the adjusted spreads (corrected for foreign exchange risk) of euro participating countries' sovereign securities over Germany and a decrease in those of non-euro countries. The objective of this paper is to study the reasons for this result and, in particular, whether the change in the price assigned by markets was due to domestic factors such as credit risk and/or market liquidity, or to international risk factors. The empirical evidence suggests that market size scale economies have increased since EMU for all European markets, so the effect of the various risk factors, even though it differs between euro and non-euro countries, is always dependent on the size of the market.
Abstract:
The present paper focuses on the analysis and discussion of a likelihood ratio (LR) development for propositions at a hierarchical level known in this context as the 'offence level'. Existing literature on the topic has considered LR developments for so-called offender-to-scene transfer cases. These settings involve, in their simplest form, a single stain found at a crime scene, but with possible uncertainty about the degree to which that stain is relevant (i.e. whether it was left by the offender). Extensions to multiple stains or multiple offenders have also been reported. The purpose of this paper is to discuss the development of a LR for offence-level propositions when case settings involve potential transfer in the opposite direction, i.e. victim/scene-to-offender transfer. This setting has not previously been considered. The rationale behind the proposed LR is illustrated through graphical probability models (i.e. Bayesian networks). The role of various uncertain parameters is investigated through sensitivity analyses as well as simulations.
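For orientation, the classic offender-to-scene result that this development generalizes can be stated compactly. With $r$ the probability that the crime stain is relevant (i.e. left by the offender) and $\gamma$ the random-occurrence probability of the matching profile, the single-stain likelihood ratio takes the form (the standard formulation from the earlier literature, under the usual simplifying assumptions):

```latex
\mathrm{LR}
= \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
= \frac{r + (1 - r)\,\gamma}{\gamma}
= \frac{r}{\gamma} + (1 - r),
```

so full relevance ($r = 1$) recovers the familiar $1/\gamma$, while $r = 0$ renders the evidence neutral ($\mathrm{LR} = 1$). The victim/scene-to-offender development in the paper reverses the direction of transfer but retains this overall structure.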
Abstract:
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
Abstract:
Morphogen gradients specify cell fate as a function of cellular position. Experiments in Drosophila embryos have shown that the Bicoid (Bcd) gradient is precise and exhibits some degree of scaling. We present experimental results on the precision of Bcd target genes for embryos with a single, double or quadruple dose of bicoid, demonstrating that precision is highest at mid-embryo and position dependent, rather than gene dependent. This confirms that the major contribution to precision is already achieved during formation of the Bcd gradient. Modeling this dynamic process, we investigate precision under inter-embryo fluctuations in different parameters affecting gradient formation. Within our modeling framework, the observed precision can only be achieved by a transient Bcd profile. Studying different extensions of our modeling framework reveals that scaling is generally position dependent and decreases toward the posterior pole. Our measurements confirm this trend, indicating almost perfect scaling except for the anterior-most expression domains, which overcompensate for fluctuations in embryo length.
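A useful back-of-the-envelope for the dosage experiments is the steady state of the standard synthesis-diffusion-degradation description of gradient formation (a textbook relation; the paper's point is precisely that the transient, pre-steady-state profile is needed):

```latex
\partial_t c = D\,\partial_x^2 c - \omega c
\;\Longrightarrow\;
c(x) = c_0\, e^{-x/\lambda}, \qquad \lambda = \sqrt{D/\omega},
```

Doubling the bicoid dose ($c_0 \to 2c_0$) then shifts every concentration threshold posteriorly by the fixed distance $\lambda \ln 2$, which is how the single-, double- and quadruple-dose embryos probe precision along the axis.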
Abstract:
Preoperative imaging for resection of chest wall malignancies is generally performed by computed tomography (CT). We evaluated the role of (18)F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in planning full-thickness chest wall resections for malignancies. We retrospectively included 18 consecutive patients operated on from 2004 to 2006 at our institution. Tumor extent was measured by CT and PET, using the two largest perpendicular tumor extensions in the chest wall plane to compute the tumor surface, assuming an elliptical shape. Imaging measurements were compared to histopathology assessment of tumor borders. CT consistently overestimated tumor size relative to histopathology, whereas PET did not (+64% vs. +1%, P<0.001). Moreover, PET was significantly better than CT at defining the size of lesions >24 cm(2), corresponding to a mean diameter >5.5 cm or an ellipse of >4 cm x 7.6 cm (positive predictive value 80% vs. 44% and specificity 93% vs. 64%, respectively). Metabolic PET imaging was superior to CT for defining the extent of chest wall tumors, particularly for tumors with a diameter >5.5 cm. PET can complement CT in planning full-thickness chest wall resection for malignancies, but its true value remains to be determined in larger, prospective studies.
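The quoted size cutoff follows directly from the elliptical-surface convention: with the two largest perpendicular extensions $a$ and $b$ taken as diameters, the surface is $S = \pi (a/2)(b/2)$. A worked check of the figures in the abstract:

```latex
S = \pi \cdot \frac{a}{2} \cdot \frac{b}{2}
\quad\Rightarrow\quad
\pi \cdot \frac{4}{2} \cdot \frac{7.6}{2} \approx 23.9 \approx 24\ \mathrm{cm}^2,
\qquad
\sqrt{ab} = \sqrt{4 \times 7.6} \approx 5.5\ \mathrm{cm}.
```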
Abstract:
There is a lack of dedicated tools for business model design at a strategic level. However, in today's economic world the ability to quickly reinvent a company's business model is essential to staying competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using the design science research methodology, a series of techniques and prototypes were designed and evaluated to offer solutions to the problem. The work is a collection of articles that can be grouped into three parts. The first establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the ways in which CAD can contribute to the design activity. The second part builds on this by proposing new techniques and tools that support the elicitation, evaluation (assessment) and evolution of business model designs with CAD. This includes features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models, and features adapted to the business model proficiency level of the users. A new way to describe and visualize multiple versions of a business model, and thereby help address the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas, such as an intermediary model that supports IT alignment by connecting the business model and the enterprise architecture, and a business model pattern for privacy in a mobile environment, using privacy as a key value proposition. The prototyped techniques and the propositions for using CAD tools in business model design will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.
Abstract:
ABSTRACT: BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
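The net benefit at a threshold probability $p_t$ has the standard closed form $\mathrm{NB}(p_t) = \mathrm{TP}/n - (\mathrm{FP}/n)\, p_t/(1 - p_t)$. Below is a minimal sketch of a decision curve computed from predicted risks (the standard cohort formulation, with synthetic data; the case-control adjustment proposed in the paper would additionally require an external prevalence estimate):

```python
import numpy as np

def net_benefit(y, risk, p_t):
    """Net benefit of treating everyone whose predicted risk exceeds p_t:
    NB = TP/n - FP/n * p_t / (1 - p_t)."""
    n = len(y)
    treat = risk >= p_t
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * p_t / (1 - p_t)

# Synthetic example: compare a model against the 'treat all' reference
rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.3).astype(int)          # ~30% prevalence
risk = np.clip(0.3 + 0.25 * (y - 0.3)             # informative but noisy risks
               + 0.15 * rng.standard_normal(1000), 0.01, 0.99)
for p_t in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(y, risk, p_t)
    nb_all = net_benefit(y, np.ones_like(risk), p_t)   # treat everyone
    print(f"p_t={p_t:.1f}  model NB={nb_model:.3f}  treat-all NB={nb_all:.3f}")
```

Plotting the model's net benefit against the 'treat all' and 'treat none' (NB = 0) references over a range of thresholds yields the decision curve itself.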