937 results for Optimal matching analysis.
Abstract:
The phosphatidylinositol 3-kinase (PI3K) pathway, through its major effector node AKT, is critical for promoting cell growth, division, motility and the evasion of apoptosis. This signaling axis is therefore commonly altered by mutations and amplifications in a myriad of malignancies. Glycogen synthase kinase 3 (GSK3) was first discovered as the kinase responsible for phosphorylating and inhibiting the activity of glycogen synthase, ultimately antagonizing the storage of glucose as glycogen. Its activity counteracts the effects of insulin on glucose metabolism, and AKT has long been recognized as one of the key molecules capable of phosphorylating GSK3 and inhibiting its activity. However, here we demonstrate that GSK3 is required for optimal phosphorylation and activation of AKT in different malignant cell lines, that this effect is independent of the type of growth factor stimulation, and that it can occur even in basal states. Both the GSK3 alpha and GSK3 beta isoforms are necessary for AKT to become fully active, playing a redundant role in this setting. We also demonstrate that this effect of GSK3 on AKT phosphorylation and full activation depends on its kinase activity, since highly specific inhibitors of GSK3 catalytic activity also reduce phosphorylated AKT. Reverse phase protein array screening of MDA-MB-231 breast cancer cells treated with RNA interference targeting GSK3 unexpectedly revealed an increase in levels of phosphorylated MAPK14 (p38). Treatment with the selective p38 inhibitor SB 202190 rescued AKT activation in that cell line, corroborating the importance of unbiased proteomic analysis in exposing cross-talk between signaling networks and demonstrating a critical role for p38 in the regulation of AKT phosphorylation.
Abstract:
The major goal of this work was to understand the function of anionic phospholipids in E. coli cell metabolism. One important finding from this work is the requirement of anionic phospholipids for the DnaA protein-dependent initiation of DNA replication. An rnhA mutation, which bypasses the need for the DnaA protein through induction of constitutive stable DNA replication, suppressed the growth arrest phenotype of a pgsA mutant in which the synthesis of anionic phospholipids was blocked. The maintenance of plasmids dependent on an oriC site for replication, and therefore on the DnaA protein, was also compromised under conditions of limiting anionic phospholipid synthesis. These results provide support for the involvement of anionic phospholipids in normal initiation of DNA replication at oriC in vivo by the DnaA protein. In addition, structural and functional requirements of two major anionic phospholipids, phosphatidylglycerol and cardiolipin, were examined. Introduction into cells of the ability to make phosphatidylinositol did not suppress the need for the naturally occurring phosphatidylglycerol. The requirement for phosphatidylglycerol was concluded to be more than maintenance of the proper membrane surface charge. Examination of the role of cardiolipin revealed its ability to replace the zwitterionic phospholipid, phosphatidylethanolamine, in maintaining an optimal membrane lipid organization. This work also reported the DNA sequence of the cls gene, which encodes the CL synthase responsible for the synthesis of cardiolipin.
Abstract:
Many studies in biostatistics deal with binary data. Some of these studies involve correlated observations, which can complicate the analysis of the resulting data. Studies of this kind typically arise when a high degree of commonality exists between test subjects. If there is a natural hierarchy in the data, multilevel analysis is an appropriate tool. Two examples are measurements on identical twins and studies of symmetrical organs or appendages, as in ophthalmic studies. Although this type of matching appears ideal for the purposes of comparison, analyzing the resulting data while ignoring the effect of intra-cluster correlation has been shown to produce biased results. This paper will explore the use of multilevel modeling of simulated binary data with predetermined levels of correlation. Data will be generated using the Beta-Binomial method with varying degrees of correlation between the lower-level observations. The data will be analyzed using the multilevel software package MLwiN (Woodhouse et al., 1995). Comparisons between the specified intra-cluster correlation of these data and the correlations estimated using multilevel analysis will be used to examine the accuracy of this technique in analyzing this type of data.
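As a rough illustration of the data-generation step described above (not the MLwiN analysis itself), the following Python sketch simulates beta-binomial clustered binary data with a specified intra-cluster correlation and recovers it with a simple ANOVA-type estimator; all parameter values and function names are invented for the example.

```python
# Minimal sketch: simulate beta-binomial clustered binary data with a target
# intra-cluster correlation (ICC) and recover it with a one-way ANOVA estimator.
import numpy as np

def simulate_beta_binomial(n_clusters, cluster_size, mu, rho, rng):
    # Beta parameters chosen so that E[p_j] = mu and ICC = 1 / (a + b + 1) = rho
    a = mu * (1.0 - rho) / rho
    b = (1.0 - mu) * (1.0 - rho) / rho
    p = rng.beta(a, b, size=n_clusters)                  # cluster-level probabilities
    return rng.binomial(1, p[:, None], size=(n_clusters, cluster_size))

def anova_icc(y):
    # ANOVA estimator for equal cluster sizes: (MSB - MSW) / (MSB + (n - 1) * MSW)
    k, n = y.shape
    cluster_means, grand_mean = y.mean(axis=1), y.mean()
    msb = n * np.sum((cluster_means - grand_mean) ** 2) / (k - 1)
    msw = np.sum((y - cluster_means[:, None]) ** 2) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(42)
y = simulate_beta_binomial(n_clusters=200, cluster_size=10, mu=0.3, rho=0.2, rng=rng)
print(f"target ICC = 0.20, estimated ICC = {anova_icc(y):.3f}")
```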
Abstract:
We examined outcomes and trends in surgery and radiation use for patients with locally advanced esophageal cancer, for whom optimal treatment is not clear. Trends in surgery and radiation for patients with T1-T3N1M0 squamous cell or adenocarcinoma of the mid or distal esophagus in the Surveillance, Epidemiology, and End Results database from 1998 to 2008 were analyzed using generalized linear models including year as a predictor; the Surveillance, Epidemiology, and End Results database does not record chemotherapy data. Local treatment was classified as unimodal if patients had only surgery or radiation and bimodal if they had both. Five-year cancer-specific survival (CSS) and overall survival (OS) were analyzed using propensity-score adjusted Cox proportional-hazard models. Overall 5-year survival for the 3295 patients identified (mean age 65.1 years, standard deviation 11.0) was 18.9% (95% confidence interval: 17.3-20.7). Local treatment was bimodal for 1274 (38.7%) and unimodal for 2021 (61.3%) patients; 1325 (40.2%) had radiation alone and 696 (21.1%) underwent only surgery. The use of bimodal therapy (32.8-42.5%, P = 0.01) and radiation alone (29.3-44.5%, P < 0.001) increased significantly from 1998 to 2008. Bimodal therapy predicted improved CSS (hazard ratio [HR]: 0.68, P < 0.001) and OS (HR: 0.58, P < 0.001) compared with unimodal therapy. For the first 7 months (before the survival curves crossed), CSS after radiation therapy alone was similar to surgery alone (HR: 0.86, P = 0.12), while OS was worse for surgery only (HR: 0.70, P = 0.001). However, after that initial timeframe, radiation therapy alone was associated with worse CSS (HR: 1.43, P < 0.001) and OS (HR: 1.46, P < 0.001). The use of radiation to treat locally advanced mid and distal esophageal cancers increased from 1998 to 2008. Survival was best when both surgery and radiation were used.
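A propensity-score adjusted Cox model of the kind described above can be sketched as follows; the code runs on synthetic data with hypothetical covariates (age, histology), not the SEER variables or the study's actual model specification.

```python
# Hedged sketch of a propensity-score adjusted Cox model on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(65, 11, n)
histology = rng.integers(0, 2, n)                        # 0 = squamous, 1 = adeno (illustrative)
# Treatment assignment depends on covariates -> confounding
logit = -0.03 * (age - 65) + 0.4 * histology
bimodal = rng.binomial(1, 1 / (1 + np.exp(-logit)))
# Survival times with a true protective treatment effect
hazard = 0.02 * np.exp(0.02 * (age - 65) - 0.5 * bimodal)
time = rng.exponential(1 / hazard)
event = (time < 60).astype(int)                          # administrative censoring at 60 months
time = np.minimum(time, 60)

df = pd.DataFrame({"age": age, "histology": histology,
                   "bimodal": bimodal, "time": time, "event": event})

# Step 1: propensity score for receiving bimodal therapy
df["ps"] = LogisticRegression().fit(
    df[["age", "histology"]], df["bimodal"]).predict_proba(df[["age", "histology"]])[:, 1]

# Step 2: Cox model for the treatment effect, adjusted for the propensity score
cph = CoxPHFitter()
cph.fit(df[["time", "event", "bimodal", "ps"]], duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```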
Abstract:
Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, where the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere is of super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes the need for robust claims about the nature of an exoplanetary atmosphere to be based on analyses involving both photometry and spectroscopy and inferred from beyond a few photometric data points, such as are typically reported for hot Jupiters.
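The retrieval idea, stripped to its bare bones, is a non-linear fit of atmospheric parameters to photometric and spectroscopic data points. The toy sketch below fits only an effective temperature and a dilution factor to a handful of synthetic band fluxes with non-linear least squares; it is not the authors' forward model, and every value in it is a placeholder.

```python
# Toy retrieval sketch: fit a temperature and a dilution factor ~ (radius/distance)^2
# to synthetic photometric fluxes. A stand-in for full optimal estimation retrieval.
import numpy as np
from scipy.optimize import least_squares

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wl_m, T):
    return 2 * h * c**2 / wl_m**5 / np.expm1(h * c / (wl_m * k * T))

def model(params, wl_m):
    T, log_dilution = params
    return 10**log_dilution * planck(wl_m, T)

wl = np.array([1.25e-6, 1.65e-6, 2.2e-6, 3.8e-6, 4.8e-6])   # rough J, H, K, L', M bands
truth = model([1000.0, -16.0], wl)
obs = truth * (1 + 0.05 * np.random.default_rng(1).normal(size=wl.size))
sigma = 0.05 * truth

def residuals(params):
    return (model(params, wl) - obs) / sigma

fit = least_squares(residuals, x0=[800.0, -15.0])
cov = np.linalg.inv(fit.jac.T @ fit.jac)   # crude covariance estimate at the optimum
print("best-fit T_eff ~", fit.x[0], "+/-", np.sqrt(cov[0, 0]))
```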
Abstract:
BACKGROUND AND AIM So far there is little evidence from randomised clinical trials (RCTs) or systematic reviews on the preferred or best number of implants to be used for the support of a fixed prosthesis in the edentulous maxilla or mandible, and no consensus has been reached. Therefore, we reviewed articles published in the past 30 years that reported on treatment outcomes for implant-supported fixed prostheses, including survival of implants and survival of prostheses after a minimum observation period of 1 year. MATERIAL AND METHODS MEDLINE and EMBASE were searched to identify eligible studies. Short- and long-term clinical studies with prospective and retrospective designs were included to determine whether relevant information could be obtained on the number of implants in relation to the prosthetic technique. Articles reporting on implant placement combined with advanced surgical techniques such as sinus floor elevation (SFE) or extensive grafting were excluded. Two reviewers extracted the data independently. RESULTS The primary search yielded 222 articles. Of these, 29 studies comprising 26 datasets fulfilled the inclusion criteria. The number of planned and placed implants was available from all studies. With two exceptions, no RCTs were found, and these two studies did not compare different numbers of implants per prosthesis. Eight studies were retrospective; all the others were prospective. Fourteen studies calculated cumulative survival rates for 5 or more years. From these data, the average survival rate was between 90% and 100%. The analysis of the selected articles revealed a clear tendency to plan 4 to 6 implants per prosthesis. For supporting a cross-arch fixed prosthesis in the maxilla, the variation is slightly greater. CONCLUSIONS In spite of a dispersion of results, similar outcomes are reported with regard to survival and the number of implants per jaw. Since the 1990s, it has been shown that there is no need to place as many implants as possible in the available jawbone. The overwhelming majority of articles dealing with standard surgical procedures to rehabilitate edentulous jaws use 4 to 6 implants.
Abstract:
PURPOSE Blood loss and blood substitution are associated with higher morbidity after major abdominal surgery. During major liver resection, low local venous pressure has been shown to reduce blood loss. Ambiguity persists concerning the impact of local venous pressure on blood loss during open radical cystectomy. We aimed to determine the association between intraoperative blood loss and pelvic venous pressure (PVP) and to determine factors affecting PVP. MATERIAL AND METHODS Within a single-center, double-blind, randomized trial, PVP was measured in 82 patients from a norepinephrine/low-volume group and in 81 from a control group with liberal hydration. For this secondary analysis, patients from each arm were stratified into subgroups with PVP <5 mmHg or ≥5 mmHg measured after cystectomy (the optimal cut-off value for discriminating patients with relevant blood loss according to Youden's index). RESULTS Median blood loss was 800 ml [range: 300-1600] in 55/163 patients (34%) with PVP <5 mmHg and 1200 ml [400-3000] in 108/163 patients (66%) with PVP ≥5 mmHg (P<0.0001). A PVP <5 mmHg was measured in 42/82 patients (51%) in the norepinephrine/low-volume group and 13/81 (16%) in the control group (P<0.0001). PVP dropped significantly after removal of abdominal packing and abdominal lifting in both groups at all time points (at the beginning and end of pelvic lymph node dissection and at the end of cystectomy) (P<0.0001). No correlation between PVP and central venous pressure could be detected. CONCLUSIONS Blood loss was significantly reduced in patients with low PVP. Factors affecting PVP were fluid management and abdominal packing.
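The cut-off selection mentioned in the methods (Youden's index) can be illustrated with a short sketch on simulated data; the 5 mmHg threshold reported in the study is not derived from this code.

```python
# Sketch of choosing a cut-off with Youden's J statistic on simulated data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
pvp = np.r_[rng.normal(4, 2, 80), rng.normal(8, 3, 80)]        # simulated PVP values [mmHg]
relevant_bleed = np.r_[np.zeros(80, int), np.ones(80, int)]    # simulated outcome

fpr, tpr, thresholds = roc_curve(relevant_bleed, pvp)
j = tpr - fpr                                                  # Youden's J = sensitivity + specificity - 1
best = thresholds[np.argmax(j)]
print(f"cut-off maximizing Youden's J: {best:.1f} mmHg")
```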
Abstract:
PURPOSE To systematically evaluate the dependence of intravoxel-incoherent-motion (IVIM) parameters on the b-value threshold separating the perfusion and diffusion compartments, and to implement and test an algorithm for the standardized computation of this threshold. METHODS Diffusion-weighted images of the upper abdomen were acquired at 3 Tesla in eleven healthy male volunteers with 10 different b-values and in two healthy male volunteers with 16 different b-values. Region-of-interest IVIM analysis was applied to the abdominal organs and skeletal muscle with a systematic increase of the b-value threshold for computing pseudodiffusion D*, perfusion fraction Fp, diffusion coefficient D, and the sum of squared residuals of the bi-exponential IVIM fit. RESULTS IVIM parameters strongly depended on the choice of the b-value threshold. The proposed algorithm successfully provided optimal b-value thresholds with the smallest residuals for all evaluated organs [s/mm²]: e.g., right liver lobe 20, spleen 20, right renal cortex 150, skeletal muscle 150. Mean D* [10⁻³ mm²/s], Fp [%], and D [10⁻³ mm²/s] values (±standard deviation) were: right liver lobe, 88.7 ± 42.5, 22.6 ± 7.4, 0.73 ± 0.12; right renal cortex, 11.5 ± 1.8, 18.3 ± 2.9, 1.68 ± 0.05; spleen, 41.9 ± 57.9, 8.2 ± 3.4, 0.69 ± 0.07; skeletal muscle, 21.7 ± 19.0, 7.4 ± 3.0, 1.36 ± 0.04. CONCLUSION IVIM parameters strongly depend upon the choice of the b-value threshold used for computation. The proposed algorithm may be used as a robust approach for IVIM analysis without organ-specific adaptation.
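The threshold-scanning idea described above can be sketched as a segmented IVIM fit repeated over candidate b-value thresholds, keeping the threshold with the smallest sum of squared residuals; the b-values, signal model parameters and noise level below are illustrative, not the study's acquisition or fitting code.

```python
# Segmented IVIM fit scanned over candidate b-value thresholds (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 10, 20, 40, 60, 100, 150, 200, 400, 800], float)   # s/mm^2

def ivim(b, s0, fp, dstar, d):
    return s0 * (fp * np.exp(-b * dstar) + (1 - fp) * np.exp(-b * d))

signal = ivim(b, 1.0, 0.2, 0.05, 1.0e-3) \
         * (1 + 0.01 * np.random.default_rng(7).normal(size=b.size))

def segmented_fit(b, s, threshold):
    high = b >= threshold
    # Step 1: log-linear mono-exponential fit of the diffusion compartment
    slope, intercept = np.polyfit(b[high], np.log(s[high]), 1)
    d = -slope
    fp = 1 - np.exp(intercept) / s[0]
    # Step 2: fit D* with D and Fp held fixed
    (dstar,), _ = curve_fit(lambda b, dstar: ivim(b, s[0], fp, dstar, d), b, s, p0=[0.01])
    ssr = np.sum((ivim(b, s[0], fp, dstar, d) - s) ** 2)
    return fp, dstar, d, ssr

results = {thr: segmented_fit(b, signal, thr) for thr in (20, 40, 60, 100, 150, 200)}
best_thr = min(results, key=lambda thr: results[thr][3])
print("threshold with smallest residuals:", best_thr, "s/mm^2,",
      "(Fp, D*, D, SSR) =", results[best_thr])
```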
Abstract:
An in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of observations to obtain the most accurate orbit propagation. The accuracy of the results of an orbit determination/improvement process depends on: tracklet length, number of observations, type of orbit, astrometric error, time interval between tracklets, and observation geometry. The latter depends on the position of the object along its orbit and the location of the observing station. This covariance analysis aims to optimize the observation strategy, taking into account the influence of the orbit shape, the relative object-observer geometry, and the interval between observations.
Abstract:
The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for space debris using optical sensors. The debris objects are discovered during systematic survey observations. In general, the result of a discovery consists of only a short observation arc, or tracklet, which is used to perform a first orbit determination in order to be able to observe the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In order to obtain the most accurate orbit within the time available, it is necessary to optimize the follow-up observation strategy. In this paper an in-depth study, using simulations and covariance analysis, is performed to identify the optimal sequence of follow-up observations to obtain the most accurate orbit propagation to be used for space debris catalogue maintenance. The main factors that determine the accuracy of the results of an orbit determination/improvement process are: tracklet length, number of observations, type of orbit, astrometric error of the measurements, time interval between tracklets, and the relative position of the object along its orbit with respect to the observing station. The main aim of the covariance analysis is to optimize the follow-up strategy as a function of the object-observer geometry, the interval between follow-up observations, and the shape of the orbit. This analysis can be applied to every orbital regime, but particular attention was dedicated to geostationary, Molniya, and geostationary transfer orbits. Finally, the case with more than two follow-up observations and the influence of a second observing station are also analyzed.
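The role of observation spacing can be illustrated with a deliberately simplified covariance analysis: for straight-line motion observed with fixed astrometric noise, the formal uncertainty of the propagated position depends strongly on which epochs are chosen. This one-dimensional caricature (all names and values are invented) only mirrors the logic of the full orbital analysis.

```python
# Toy 1-D covariance analysis: how the spacing of observations affects the
# uncertainty of a propagated position for motion x(t) = x0 + v*t.
import numpy as np

def propagated_sigma(epochs, sigma_obs, t_predict):
    # Design matrix for estimating (x0, v) from positions measured at 'epochs'
    H = np.column_stack([np.ones(len(epochs)), epochs])
    cov = sigma_obs**2 * np.linalg.inv(H.T @ H)          # covariance of (x0, v)
    a = np.array([1.0, t_predict])                       # propagate to prediction epoch
    return np.sqrt(a @ cov @ a)

sigma_obs = 1.0          # arbitrary astrometric error
t_predict = 10.0         # epoch at which the propagated position is needed
for epochs in ([0.0, 0.5, 1.0], [0.0, 2.0, 4.0], [0.0, 1.0, 8.0]):
    print(epochs, "->", round(propagated_sigma(np.array(epochs), sigma_obs, t_predict), 2))
```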
Abstract:
INTRODUCTION In iliosacral screw fixation, the dimensions of solely intraosseous (secure) pathways perpendicular to the iliosacral articulation (optimal), with corresponding entry points (EP) and aiming points (AP) on lateral fluoroscopic projections, and the factors (demographic, anatomic) influencing these have not yet been described. METHODS In 100 CT scans of normal pelvises, the height and width of the secure and optimal pathways were measured on axial and coronal views bilaterally (total measurements: n=200). The corresponding EP and AP were defined as the locations of the screw head and tip, respectively, where a screw lying in the centre of the pathway crosses the lateral cortex of the innominate bone (EP) and the sacral midline (AP). EP and AP were transferred to the sagittal pelvic view using a coordinate system with the zero-point in the centre of the posterior cortex of the S1 vertebral body (x-axis parallel to the upper S1 endplate). Distances are expressed in relation to the anteroposterior distance of the S1 upper endplate (in %). The influence of demographic (age, gender, side) and/or anatomic (PIA=pelvic incidence angle; TCA=transversal curvature angle; PID-Index=pelvic incidence distance-index; USW=unilateral sacral width-index) parameters on pathway dimensions and positions of EP and AP was assessed (multivariate analysis). RESULTS The width, height, or both dimensions of the pathways were at least 7 mm in 32%, 53%, and 20% of cases, respectively. The EP was on average 14±24% behind the centre of the posterior S1 cortex and 41±14% below it. The AP was on average 53±7% in front of the centre of the posterior S1 cortex and 11±7% above it. PIA influenced the width of the pathways; TCA and the PID-Index influenced their height. PIA, the PID-Index, and the USW-Index significantly influenced EP and AP. Age, gender, and TCA significantly influenced EP. CONCLUSION Secure and optimal placement of screws of at least 7 mm in diameter will be unfeasible in the majority of patients. Thoughtful preoperative planning of screw placement on CT scans is advisable to identify secure pathways with an optimal direction. For this purpose, the presented methodology of determining and transferring EPs and APs of corresponding pathways to the sagittal pelvic view using a coordinate system may be useful.
Abstract:
OBJECTIVE The purpose of this study was to investigate outcomes of patients treated with prasugrel or clopidogrel after percutaneous coronary intervention (PCI) in a nationwide acute coronary syndrome (ACS) registry. BACKGROUND Prasugrel was found to be superior to clopidogrel in a randomized trial of ACS patients undergoing PCI. However, little is known about its efficacy in everyday practice. METHODS All ACS patients enrolled in the Acute Myocardial Infarction in Switzerland (AMIS)-Plus registry undergoing PCI and treated with a thienopyridine P2Y12 inhibitor between January 2010 and December 2013 were included in this analysis. Patients were stratified according to treatment with prasugrel or clopidogrel, and outcomes were compared using propensity score matching. The primary endpoint was a composite of death, recurrent infarction and stroke at hospital discharge. RESULTS Of 7621 patients, 2891 received prasugrel (38%) and 4730 received clopidogrel (62%). Independent predictors of in-hospital mortality were age, Killip class >2, STEMI, Charlson comorbidity index >1, and resuscitation prior to admission. After propensity score matching (2301 patients per group), the primary endpoint was significantly lower in prasugrel-treated patients (3.0% vs 4.3%; p=0.022), while bleeding events were more frequent (4.1% vs 3.0%; p=0.048). In-hospital mortality was significantly reduced (1.8% vs 3.1%; p=0.004), but no significant differences were observed in rates of recurrent infarction (0.8% vs 0.7%; p=1.00) or stroke (0.5% vs 0.6%; p=0.85). In a predefined subset of matched patients with one-year follow-up (n=1226), mortality between discharge and one year was not significantly reduced in prasugrel-treated patients (1.3% vs 1.9%, p=0.38). CONCLUSIONS In everyday practice in Switzerland, prasugrel is predominantly used in younger patients with STEMI undergoing primary PCI. A propensity score-matched analysis suggests a mortality benefit from prasugrel compared with clopidogrel in these patients.
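A 1:1 nearest-neighbour propensity-score matching step of the kind used above can be sketched on synthetic data as follows; the covariates, caliper rule and outcome definitions are invented for illustration, and matching is done with replacement for simplicity.

```python
# Sketch of 1:1 nearest-neighbour propensity-score matching on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(11)
n = 5000
age = rng.normal(64, 12, n)
stemi = rng.binomial(1, 0.45, n)
prasugrel = rng.binomial(1, 1 / (1 + np.exp(0.06 * (age - 64) - 0.8 * stemi)))
event = rng.binomial(1, 0.02 + 0.015 * (age > 75) - 0.005 * prasugrel)

df = pd.DataFrame({"age": age, "stemi": stemi, "prasugrel": prasugrel, "event": event})
ps = LogisticRegression().fit(df[["age", "stemi"]], df["prasugrel"]).predict_proba(
    df[["age", "stemi"]])[:, 1]
df["logit_ps"] = np.log(ps / (1 - ps))

treated, control = df[df.prasugrel == 1], df[df.prasugrel == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["logit_ps"]])
dist, idx = nn.kneighbors(treated[["logit_ps"]])
caliper = 0.2 * df["logit_ps"].std()                     # common caliper rule of thumb
keep = dist.ravel() <= caliper                           # matching with replacement, no refinement
matched_treated = treated[keep]
matched_control = control.iloc[idx.ravel()[keep]]

print("event rate, prasugrel  :", matched_treated["event"].mean().round(3))
print("event rate, clopidogrel:", matched_control["event"].mean().round(3))
```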
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free riding opportunities give rise to incentives to strategically improve one's bargaining power, which work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality, and the quality of the good is known only to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, and this reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
Abstract:
OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The rate of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted during the period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
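The stratified propensity score approach mentioned in the methods can be sketched as follows: estimate the probability of resection, cut it into quintiles, and fit a Cox model stratified on those quintiles. The data, covariates and effect sizes are synthetic placeholders, not the SEER analysis itself.

```python
# Sketch of a stratified propensity score Cox analysis on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 4000
age = rng.normal(68, 12, n)
resection = rng.binomial(1, 1 / (1 + np.exp(0.05 * (age - 68))))
hazard = 0.03 * np.exp(0.02 * (age - 68) - 0.6 * resection)
time = np.minimum(rng.exponential(1 / hazard), 120)      # censoring at 120 months
event = (time < 120).astype(int)

df = pd.DataFrame({"age": age, "resection": resection, "time": time, "event": event})
df["ps"] = LogisticRegression().fit(
    df[["age"]], df["resection"]).predict_proba(df[["age"]])[:, 1]
df["ps_stratum"] = pd.qcut(df["ps"], 5, labels=False)    # propensity score quintiles

cph = CoxPHFitter()
cph.fit(df[["time", "event", "resection", "ps_stratum"]],
        duration_col="time", event_col="event", strata=["ps_stratum"])
print(cph.summary.loc["resection", ["exp(coef)", "p"]])
```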
Abstract:
Many attempts have already been made to detect exomoons around transiting exoplanets, but the first confirmed discovery is still pending. The experience gathered so far allows us to better optimize future space telescopes for this challenge already during the development phase. In this paper we focus on the forthcoming CHaracterising ExOPlanet Satellite (CHEOPS), describing an optimized decision algorithm with step-by-step evaluation and calculating the number of transits required for an exomoon detection for various planet-moon configurations observable by CHEOPS. We explore the most efficient way to carry out such observations so as to minimize the cost in observing time. Our study is based on photocentric transit timing variation (PTV) observations in simulated CHEOPS data, but the recipe does not depend on the actual detection method and can be substituted with, e.g., the photodynamical method for later applications. Using the current state-of-the-art simulation of CHEOPS data, we analyzed transit observation sets for different star-planet-moon configurations and performed a bootstrap analysis to determine their detection statistics. We found that the detection limit is around an Earth-sized moon. In the case of favorable spatial configurations, systems with at least a large moon and a Neptune-sized planet, an 80% detection chance requires at least 5-6 transit observations on average. There is also a nonzero chance for smaller moons, but the detection statistics deteriorate rapidly, while the number of necessary transit measurements increases quickly. After the CoRoT and Kepler spacecraft, CHEOPS will be the next dedicated space telescope to observe exoplanetary transits and characterize systems with known Doppler planets. Although it has a smaller aperture than Kepler (the ratio of the mirror diameters is about 1/3) and is mounted with a CCD similar to Kepler's, it will observe brighter stars and operate with a larger sampling rate; therefore, the detection limit for an exomoon can be the same or better, which will make CHEOPS a competitive instrument in the quest for exomoons.
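A bootstrap estimate of detection statistics of the kind mentioned above can be sketched as follows; the PTV amplitude, noise level and 3-sigma excess-scatter criterion are placeholders rather than the paper's CHEOPS simulation.

```python
# Bootstrap sketch: for each number of transits, resample simulated PTV measurements
# and ask how often the moon-induced scatter exceeds the noise floor at ~3 sigma.
import numpy as np

rng = np.random.default_rng(2023)
ptv_amplitude, noise = 60.0, 30.0          # seconds, illustrative only

def detection_fraction(n_transits, n_boot=2000):
    # One synthetic campaign: moon PTV sampled at random orbital phases plus noise
    phases = rng.uniform(0, 2 * np.pi, n_transits)
    measured = ptv_amplitude * np.sin(phases) + rng.normal(0, noise, n_transits)
    detections = 0
    for _ in range(n_boot):
        sample = rng.choice(measured, size=n_transits, replace=True)
        # "Detection" if the excess variance over the known noise floor is significant
        excess = sample.std(ddof=1) ** 2 - noise ** 2
        detections += excess > 3 * noise ** 2 * np.sqrt(2.0 / (n_transits - 1))
    return detections / n_boot

for n in (3, 4, 5, 6, 8):
    print(f"{n} transits -> bootstrap detection fraction {detection_fraction(n):.2f}")
```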