895 results for Multi-User-Single-Antenna (MUSA)


Relevance: 30.00%

Abstract:

BACKGROUND High-risk prostate cancer (PCa) is an extremely heterogeneous disease. A clear definition of prognostic subgroups is mandatory. OBJECTIVE To develop a pretreatment prognostic model for PCa-specific survival (PCSS) in high-risk PCa based on combinations of unfavorable risk factors. DESIGN, SETTING, AND PARTICIPANTS We conducted a retrospective multicenter cohort study including 1360 consecutive patients with high-risk PCa treated at eight European high-volume centers. INTERVENTION Retropubic radical prostatectomy with pelvic lymphadenectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Two Cox multivariable regression models were constructed to predict PCSS as a function of dichotomization of clinical stage (< cT3 vs cT3-4), Gleason score (GS) (2-7 vs 8-10), and prostate-specific antigen (PSA; ≤ 20 ng/ml vs > 20 ng/ml). The first "extended" model includes all seven possible combinations; the second "simplified" model includes three subgroups: a good prognosis subgroup (one single high-risk factor); an intermediate prognosis subgroup (PSA >20 ng/ml and stage cT3-4); and a poor prognosis subgroup (GS 8-10 in combination with at least one other high-risk factor). The predictive accuracy of the models was summarized and compared. Survival estimates and clinical and pathologic outcomes were compared between the three subgroups. RESULTS AND LIMITATIONS The simplified model yielded an R² of 33% with a 5-yr area under the curve (AUC) of 0.70 with no significant loss of predictive accuracy compared with the extended model (R²: 34%; AUC: 0.71). The 5- and 10-yr PCSS rates were 98.7% and 95.4%, 96.5% and 88.3%, 88.8% and 79.7%, for the good, intermediate, and poor prognosis subgroups, respectively (p = 0.0003). Overall survival, clinical progression-free survival, and histopathologic outcomes significantly worsened in a stepwise fashion from the good to the poor prognosis subgroups. Limitations of the study are the retrospective design and the long study period. CONCLUSIONS This study presents an intuitive and easy-to-use stratification of high-risk PCa into three prognostic subgroups. The model is useful for counseling and decision making in the pretreatment setting.
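
For illustration only, the following Python sketch encodes the three-subgroup "simplified" stratification rule exactly as described above; the function and variable names are hypothetical, it is not the authors' code, and the survival modelling itself is omitted.

```python
# Hypothetical sketch of the "simplified" pretreatment stratification described above;
# names and thresholds follow the abstract, not any published code.

def prognosis_subgroup(stage_cT3_4: bool, gleason_8_10: bool, psa_over_20: bool) -> str:
    """Assign a high-risk prostate cancer patient to a prognostic subgroup."""
    n_factors = sum([stage_cT3_4, gleason_8_10, psa_over_20])
    if gleason_8_10 and n_factors >= 2:
        return "poor"            # GS 8-10 plus at least one other high-risk factor
    if psa_over_20 and stage_cT3_4:
        return "intermediate"    # PSA > 20 ng/ml combined with stage cT3-4
    if n_factors == 1:
        return "good"            # one single high-risk factor
    raise ValueError("patient does not meet the high-risk definition used here")

print(prognosis_subgroup(stage_cT3_4=True, gleason_8_10=False, psa_over_20=False))  # good
```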

Relevance: 30.00%

Abstract:

BACKGROUND Oesophageal adenocarcinoma or Barrett's adenocarcinoma (EAC) is increasing in incidence, and stratification of prognosis might improve disease management. Multi-colour fluorescence in situ hybridisation (FISH) investigating ERBB2, MYC, CDKN2A and ZNF217 has recently shown promising results for the diagnosis of dysplasia and cancer using cytological samples. METHODS To identify markers of prognosis we targeted four selected gene loci using multi-colour FISH applied to a tissue microarray containing 130 EAC samples. Prognostic predictors (P1, P2, P3) based on genomic copy numbers of the four loci were statistically assessed for their ability to stratify patients according to overall survival in combination with clinical data. RESULTS The best stratification into favourable and unfavourable prognoses was achieved by P1, the percentage of cells with fewer than two ZNF217 signals; P2, the percentage of cells with fewer ERBB2 than ZNF217 signals; and P3, the overall ratio of ERBB2 to ZNF217 signals. Median survival times were 32 vs 73 months for P1, 28 vs 73 months for P2, and 27 vs 65 months for P3. Within each tumour grade, P2 subdivided patients into distinct prognostic groups, with median survival times differing by at least 35 months. CONCLUSIONS Cell signal numbers for the ERBB2 and ZNF217 loci were independent of tumour stage and differentiation grade. The prognostic value of multi-colour FISH assays is applicable to EAC and is superior to single markers.
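
As a worked illustration of how the three predictors are defined, the Python sketch below computes P1, P2 and P3 from hypothetical per-cell signal counts; the survival thresholds used for stratification are not reproduced here.

```python
# Minimal sketch of the three FISH-based predictors as described in the abstract;
# the per-cell signal counts used in the example are invented.

def fish_predictors(erbb2_counts, znf217_counts):
    """Compute P1-P3 from paired per-cell ERBB2 and ZNF217 signal counts."""
    cells = list(zip(erbb2_counts, znf217_counts))
    n = len(cells)
    p1 = sum(1 for _, z in cells if z < 2) / n                 # fraction of cells with < 2 ZNF217 signals
    p2 = sum(1 for e, z in cells if e < z) / n                 # fraction with fewer ERBB2 than ZNF217 signals
    p3 = sum(e for e, _ in cells) / sum(z for _, z in cells)   # overall ERBB2/ZNF217 signal ratio
    return p1, p2, p3

print(fish_predictors([2, 3, 1, 2], [2, 4, 1, 3]))  # (0.25, 0.5, 0.8)
```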

Relevance: 30.00%

Abstract:

BACKGROUND Among other mismatches between human and pig, incompatibilities in the blood coagulation systems hamper the xenotransplantation of vascularized organs. Providing the porcine endothelium with human thrombomodulin (hTM) is hypothesized to overcome the impaired activation of protein C by a heterodimer consisting of human thrombin and porcine TM. METHODS We evaluated regulatory regions of the THBD gene, optimized vectors for transgene expression, and generated hTM-expressing pigs by somatic cell nuclear transfer. Genetically modified pigs were characterized at the molecular, cellular, histological, and physiological levels. RESULTS A 7.6-kb fragment containing the entire upstream region of the porcine THBD gene was found to drive high expression in a porcine endothelial cell line and was therefore used to control hTM expression in transgenic pigs. The abundance of hTM was restricted to the endothelium, in line with the predicted pattern, and transgene expression was stably transmitted to the offspring. When endothelial cells from pigs carrying the hTM transgene (either alone or in combination with an aGalTKO and a transgene encoding human CD46) were tested in a coagulation assay with human whole blood, the clotting time was increased three- to four-fold (P<0.001) compared with wild-type and aGalTKO/CD46 transgenic endothelial cells. This demonstrated, for the first time, the anticoagulant properties of hTM on porcine endothelial cells in a human whole blood assay. CONCLUSIONS The biological efficacy of hTM suggests that the (multi-)transgenic donor pigs described here have the potential to overcome coagulation incompatibilities in pig-to-primate xenotransplantation.

Relevance: 30.00%

Abstract:

Software corpora facilitate the reproducibility of analyses; however, static analysis of an entire corpus still requires considerable effort, often duplicated unnecessarily by multiple users. Moreover, most corpora are designed for a single language, which increases the effort of cross-language analysis. To address these issues we propose Pangea, an infrastructure allowing fast development of static analyses on multi-language corpora. Pangea uses language-independent meta-models stored as object model snapshots that can be directly loaded into memory and queried without any parsing overhead. To reduce the effort of performing static analyses, Pangea provides out-of-the-box support for: creating and refining analyses in a dedicated environment, deploying an analysis on an entire corpus, using a runner that supports parallel execution, and exporting results in various formats. In this tool demonstration we introduce Pangea and provide several usage scenarios that illustrate how it reduces the cost of analysis.
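
The workflow the abstract describes (pre-built model snapshots loaded without parsing, an analysis deployed over the whole corpus by a parallel runner, results exported) can be pictured with the hypothetical Python sketch below; none of these names correspond to Pangea's actual API, which is exposed through its own dedicated analysis environment.

```python
# Hypothetical illustration of the workflow described above (load pre-parsed model
# snapshots, run one analysis over the whole corpus in parallel, export results).
# Paths, file formats, and the snapshot structure are assumptions for illustration.
import json
import pickle
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def count_classes(snapshot_path: Path) -> dict:
    """Example analysis: query a pre-built model snapshot without re-parsing sources."""
    with snapshot_path.open("rb") as f:
        model = pickle.load(f)                      # language-independent meta-model entities
    return {"project": snapshot_path.stem,
            "classes": sum(1 for e in model if e.get("kind") == "class")}

if __name__ == "__main__":
    snapshots = sorted(Path("corpus-snapshots").glob("*.pkl"))
    with ProcessPoolExecutor() as pool:             # parallel runner over the corpus
        results = list(pool.map(count_classes, snapshots))
    Path("results.json").write_text(json.dumps(results, indent=2))  # export
```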

Relevance: 30.00%

Abstract:

Accurate predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed that combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on (i) Dempster-Shafer Evidential Theory (DST), (ii) Genetic Algorithms (GA), and (iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
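
As a point of reference for the fusion idea, the sketch below shows the simplest scheme the abstract compares against: a linear (weighted) fusion of two predictors, with the weight picked on validation data. The DST-, GA- and GP-based schemes are considerably more elaborate, and the data here are synthetic.

```python
# Minimal sketch of weighted linear fusion of two glucose predictors; stand-in data only.
import numpy as np

rng = np.random.default_rng(0)
y_true = 120 + 40 * np.sin(np.linspace(0, 6, 200))        # synthetic glucose trace (mg/dl)
pred_carx = y_true + rng.normal(0, 12, y_true.size)        # stand-in for the cARX predictor
pred_rnn = y_true + rng.normal(0, 15, y_true.size)         # stand-in for the RNN predictor

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# pick the convex weight w that minimises validation RMSE of w*cARX + (1-w)*RNN
weights = np.linspace(0, 1, 101)
best_w = min(weights, key=lambda w: rmse(w * pred_carx + (1 - w) * pred_rnn, y_true))
fused = best_w * pred_carx + (1 - best_w) * pred_rnn
print(f"w={best_w:.2f}  cARX RMSE={rmse(pred_carx, y_true):.1f}  "
      f"RNN RMSE={rmse(pred_rnn, y_true):.1f}  fused RMSE={rmse(fused, y_true):.1f}")
```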

Relevance: 30.00%

Abstract:

This paper presents a measurement of the cross-section for high transverse momentum W and Z bosons produced in pp collisions and decaying to all-hadronic final states. The data used in the analysis were recorded by the ATLAS detector at the CERN Large Hadron Collider at a centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 4.6 fb⁻¹. The measurement is performed by reconstructing the boosted W or Z bosons in single jets. The reconstructed jet mass is used to identify the W and Z bosons, and a jet substructure method based on energy cluster information in the jet centre-of-mass frame is used to suppress the large multi-jet background. The cross-section for events with a hadronically decaying W or Z boson, with transverse momentum pT > 320 GeV and pseudorapidity |η| < 1.9, is measured to be σ_{W+Z} = 8.5 ± 1.7 pb and is compared to next-to-leading-order calculations. The selected events are further used to study jet grooming techniques.

Relevance: 30.00%

Abstract:

The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past years, C. elegans' motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is constructed around the concept of Mixture of Gaussians (MOG) models, in which statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to simplify the burden often imposed on users; only a single image that includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare its performance with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach for the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans segmentation and ‘skeletonizing’ across a wide range of motility assays.
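
A rough sketch of the underlying idea, assuming per-pixel intensity features and scikit-learn's GaussianMixture: fit one mixture model for the nematode and one for the background from a single annotated image, then label new pixels by comparing likelihoods. This illustrates the MOG concept only; it is not the authors' MEME implementation.

```python
# Gaussian-mixture foreground/background segmentation sketch; synthetic image data.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_models(image: np.ndarray, worm_mask: np.ndarray):
    """Learn intensity models for worm and background from a single annotated image."""
    worm = GaussianMixture(n_components=2).fit(image[worm_mask].reshape(-1, 1))
    background = GaussianMixture(n_components=3).fit(image[~worm_mask].reshape(-1, 1))
    return worm, background

def segment(image: np.ndarray, worm, background) -> np.ndarray:
    """Label each pixel as worm where its likelihood under the worm model is higher."""
    pixels = image.reshape(-1, 1)
    return (worm.score_samples(pixels) > background.score_samples(pixels)).reshape(image.shape)

# usage with synthetic data: a dark worm-like blob on a brighter noisy background
rng = np.random.default_rng(1)
img = rng.normal(200, 10, (64, 64))
img[30:34, 10:50] = rng.normal(80, 5, (4, 40))
mask = np.zeros(img.shape, dtype=bool)
mask[30:34, 10:50] = True
w, b = fit_models(img, mask)
print(segment(img, w, b).sum(), "pixels labelled as worm")
```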

Relevance: 30.00%

Abstract:

The antimicrobial activity of taurolidine was compared with that of minocycline against microbial species associated with periodontitis (four single strains and a 12-species mixture). Minimal inhibitory concentrations (MICs), minimal bactericidal concentrations (MBCs), killing, as well as activities on established and forming single-species biofilms and a 12-species biofilm, were determined. The MICs of taurolidine against single species were always 0.31 mg/ml, and the MBCs were 0.64 mg/ml. The mixed microbiota used was less sensitive to taurolidine; both the MIC and the MBC were 2.5 mg/ml. The strains and the mixture were completely killed by 2.5 mg/ml taurolidine, whereas 256 μg/ml minocycline reduced the bacterial counts of the mixture by 5 log10 colony forming units (cfu). Coating the surface with 10 mg/ml taurolidine or 256 μg/ml minocycline completely prevented biofilm formation by Porphyromonas gingivalis ATCC 33277, but not by Aggregatibacter actinomycetemcomitans Y4 or the mixture. On 4.5-d-old biofilms, taurolidine acted in a concentration-dependent manner, with reductions of 5 log10 cfu (P. gingivalis ATCC 33277) and 7 log10 cfu (A. actinomycetemcomitans Y4) when applying 10 mg/ml. Minocycline decreased the cfu counts by 1-2 log10 cfu independent of the concentration used. The reduction of the cfu counts in the 4.5-d-old multi-species biofilms was about 3 log10 cfu after application of any minocycline concentration and after using 10 mg/ml taurolidine. Taurolidine is active against species associated with periodontitis, even within biofilms. Nevertheless, complete elimination of complex biofilms by taurolidine seems to be impossible, which underlines the importance of mechanical removal of biofilms prior to the application of taurolidine.

Relevance: 30.00%

Abstract:

OBJECTIVES Readout-segmented echo planar imaging (rs-EPI) significantly reduces susceptibility artifacts in diffusion-weighted imaging (DWI) of the breast compared to single-shot EPI but is limited by longer scan times. To compensate for this, we tested a new simultaneous multi-slice (SMS) acquisition for accelerated rs-EPI. MATERIALS AND METHODS After approval by the local ethics committee, eight healthy female volunteers (age, 38.9 ± 13.1 years) underwent breast MRI at 3 T. Conventional as well as two-fold (2× SMS) and three-fold (3× SMS) slice-accelerated rs-EPI sequences were acquired at b-values of 50 and 800 s/mm². Two independent readers analyzed the apparent diffusion coefficient (ADC) in fibroglandular breast parenchyma. The signal-to-noise ratio (SNR) was estimated based on the subtraction method. ADC and SNR were compared between sequences by using the Friedman test. RESULTS The acquisition time was 4:21 min for conventional rs-EPI, 2:35 min for 2× SMS rs-EPI, and 1:44 min for 3× SMS rs-EPI. ADC values were similar in all sequences (mean values 1.62×10⁻³ mm²/s, p = 0.99). Mean SNR was 27.7-29.6, and no significant differences were found among the sequences (p = 0.83). CONCLUSION SMS rs-EPI yields similar ADC values and SNR compared to conventional rs-EPI at markedly reduced scan time. Thus, SMS excitation increases the clinical applicability of rs-EPI for DWI of the breast.
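
For readers unfamiliar with the subtraction method mentioned for SNR estimation, one common formulation (an assumption here, since the abstract does not spell out the exact variant used) takes two repeated acquisitions, estimates the signal from their mean and the noise from the standard deviation of their difference divided by √2:

```python
# Hedged sketch of the difference ("subtraction") method for SNR estimation;
# the formulation and ROI choice are assumptions, and the images are synthetic.
import numpy as np

def snr_subtraction(img1: np.ndarray, img2: np.ndarray, roi: np.ndarray) -> float:
    signal = np.mean(((img1 + img2) / 2.0)[roi])          # mean signal in the ROI
    noise = np.std((img1 - img2)[roi]) / np.sqrt(2.0)      # noise from the difference image
    return float(signal / noise)

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)
a = truth + rng.normal(0, 4, truth.shape)
b = truth + rng.normal(0, 4, truth.shape)
roi = np.ones(truth.shape, dtype=bool)
print(f"estimated SNR ≈ {snr_subtraction(a, b, roi):.1f}")  # ~25 for signal 100, sigma 4
```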

Relevance: 30.00%

Abstract:

This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers, from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
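
The center-selection step can be illustrated with the short Python sketch below: previously evaluated points are ranked by non-dominated sorting on the two objectives (function value and distance to other evaluated points), P centers are taken from the best fronts, and candidate points are generated by random perturbation. This is a simplified illustration, not the SOP implementation; the surrogate fitting and tabu bookkeeping are omitted.

```python
# Simplified illustration of non-dominated sorting and candidate generation; not SOP itself.
import numpy as np

def non_dominated_fronts(objs: np.ndarray):
    """Simple O(n^2) non-dominated sorting; objs has one row per point (minimise both columns)."""
    remaining = list(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (30, 2))                       # previously evaluated points
f = np.sum(X ** 2, axis=1)                            # objective 1: expensive value (minimise)
d = np.array([np.sort(np.linalg.norm(X - x, axis=1))[1] for x in X])  # distance to nearest other point
objs = np.column_stack([f, -d])                       # objective 2: maximise distance -> minimise -d
centers = [i for front in non_dominated_fronts(objs) for i in front][:4]    # P = 4 centers
candidates = X[centers][:, None, :] + rng.normal(0, 0.5, (4, 10, 2))        # 10 perturbed candidates each
print("center indices:", centers, "candidate array shape:", candidates.shape)
```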

Relevance: 30.00%

Abstract:

The current study investigated whether peripheral vision can be used to monitor multiple moving objects and to detect single-target changes. For this purpose, in Experiment 1, a modified MOT setup with a large projection and a constant-position centroid phase was first validated. Classical findings regarding the use of a virtual centroid to track multiple objects and the dependency of tracking accuracy on target speed were successfully replicated. Thereafter, the main experimental variations regarding the manipulation of to-be-detected target changes were introduced in Experiment 2. In addition to a button press used for the detection task, gaze behavior was assessed using an integrated eye-tracking system. The analysis of saccadic reaction times in relation to the motor response shows that peripheral vision is naturally used to detect motion and form changes in MOT, because the saccade to the target occurred after target-change offset. Furthermore, for changes of comparable task difficulty, motion changes are detected better by peripheral vision than form changes. Findings indicate that capabilities of the visual system (e.g., visual acuity) affect change detection rates and that covert-attention processes may be affected by vision-related aspects such as spatial uncertainty. Moreover, it is argued that a centroid-MOT strategy might reduce the amount of saccade-related costs and that eye tracking seems to be generally valuable for testing predictions derived from theories on MOT. Finally, implications for testing covert attention in applied settings are proposed.

Relevance: 30.00%

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using stepwise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for application to phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected based on maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected based on low pairwise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of using an appropriate statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains the desired overall type I error rate while not overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis. However, for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
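
For concreteness, the two pairwise LD measures compared in the dissertation can be computed from haplotype and allele frequencies as in the sketch below; the input frequencies are invented for the example.

```python
# Worked example of the pairwise LD measures D' and r^2 for two biallelic SNPs.
def ld_measures(p_ab: float, p_a: float, p_b: float):
    """D' and r^2 for alleles A (freq p_a) and B (freq p_b) with haplotype AB frequency p_ab."""
    d = p_ab - p_a * p_b                                   # raw disequilibrium coefficient
    d_max = (min(p_a * (1 - p_b), (1 - p_a) * p_b) if d > 0
             else min(p_a * p_b, (1 - p_a) * (1 - p_b)))   # normalisation bound for D'
    d_prime = abs(d) / d_max
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

d_prime, r2 = ld_measures(p_ab=0.45, p_a=0.5, p_b=0.6)
print(f"D' = {d_prime:.2f}, r^2 = {r2:.2f}")   # D' = 0.75, r^2 ≈ 0.38
```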

Relevance: 30.00%

Abstract:

Anticancer drugs typically are administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate the parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach to modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent; the others were well-known natural compounds including curcumin, EGCG, quercetin, and rhein. Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, the cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). The results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to the rational design of mixtures during preclinical drug development.
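
As background for the comparison, the Median-Effect method mentioned above fits the relation fa/(1−fa) = (D/Dm)^m to each drug's concentration-effect data; the minimal illustration below uses an ordinary least squares fit on log-transformed synthetic data (the MixLow method replaces this with a nonlinear mixed-effects model, which is not reproduced here).

```python
# Small illustration of fitting the Median-Effect (Chou) equation by OLS on linearised data.
import numpy as np

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # drug concentration (arbitrary units, synthetic)
fa = np.array([0.15, 0.30, 0.52, 0.74, 0.88])        # fraction of cells affected (synthetic)

# linearised form: log(fa/(1-fa)) = m*log(D) - m*log(Dm)
y = np.log(fa / (1 - fa))
m, intercept = np.polyfit(np.log(doses), y, 1)
dm = np.exp(-intercept / m)                           # median-effect dose (fa = 0.5)
print(f"slope m = {m:.2f}, median-effect dose Dm = {dm:.2f}")
```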

Relevance: 30.00%

Abstract:

Nanomedicine is an innovative field of science which has recently generated many drug delivery platforms with exciting results. The great potential of these strategies relies on the unique characteristics of the devices at the nano-scale in terms of long circulation times in the blood stream, selective accumulation at lesion sites, increased solubility in aqueous solutions, etc. Herein we report on a new drug delivery system, known as a multistage system, which is composed of non-spherical, mesoporous silicon particles loaded with second-stage nanoparticles. The rationally designed particle shape, the possibility to modulate the surface properties, and the degree of porosity allow these carriers to be optimized for vascular targeting and to overcome the numerous biological barriers found in drug delivery. In this study we investigated the intra- and intercellular trafficking of the multistage system in endothelial cells, providing evidence of its biocompatibility as well as its ability to perform multiple intra- and intercellular tasks. Once internalized in cells, the multi-particle construct is able to dissociate, localizing in different subcellular compartments which can be targeted for exocytosis. In particular, the second-stage nanoparticles were found to be secreted in microvesicles, which can act as mediators of particle transfer across the endothelium and between different endothelial and cancer cells.

Relevance: 30.00%

Abstract:

Accurate weight perception is particularly important in tasks where the user has to apply vertical forces to ensure the safe landing of a fragile object or precise penetration of a surface with a probe. Moreover, depending on physical properties of objects such as weight and size, we may switch between unimanual and bimanual manipulation during a task. Research has shown that bimanual manipulation of real objects results in a misperception of their weight: they tend to feel lighter than similarly heavy objects handled with one hand only [8]. Effective simulation of bimanual manipulation with desktop haptic interfaces should be able to replicate this effect of bimanual manipulation on weight perception. Here, we present the MasterFinger-2, a new multi-finger haptic interface allowing bimanual manipulation of virtual objects with a precision grip, and we conduct weight discrimination experiments to evaluate its capacity to simulate unimanual and bimanual weight. We found that the bimanual ‘lighter’ bias is also observed with the MasterFinger-2 but that the sensitivity to changes of virtual weights deteriorated.