Abstract:
Patent foramen ovale and obstructive sleep apnoea are frequently encountered in the general population. Owing to their prevalence, they may coexist fortuitously; however, the prevalence of patent foramen ovale seems to be higher in patients with obstructive sleep apnoea. We review the epidemiological data, pathophysiology, and the diagnostic and therapeutic options for both patent foramen ovale and obstructive sleep apnoea. We focus on the pathophysiological links that could explain a potential association between the two conditions and its implications, especially for the risk of stroke.
Abstract:
Detecting local differences between groups of connectomes is a great challenge in neuroimaging, because of the large number of tests that have to be performed and the resulting burden of multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and the prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply a screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We demonstrate the benefit of the proposed strategy by means of different simulations, and we present a real application comparing the connectomes of preschool children and adolescents.
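To make the two-step idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the function name, the use of two-sample t-tests, and the naive Bonferroni split are our assumptions, not the authors' procedure, which comes with dedicated error-rate guarantees.

```python
import numpy as np
from scipy import stats

def screen_then_filter(group_a, group_b, subnetworks, alpha=0.05):
    """group_a, group_b: (subjects x connections) arrays.
    subnetworks: list of index arrays partitioning the connections."""
    # Screening: one test per subnetwork (here: t-test on mean connectivity),
    # Bonferroni-corrected over the number of subnetworks.
    survivors = []
    for idx in subnetworks:
        t, p = stats.ttest_ind(group_a[:, idx].mean(axis=1),
                               group_b[:, idx].mean(axis=1))
        if p < alpha / len(subnetworks):
            survivors.append(idx)
    # Filtering: connection-level tests restricted to surviving subnetworks;
    # the multiplicity correction is spread only over connections still in play.
    n_kept = sum(len(idx) for idx in survivors)
    hits = []
    for idx in survivors:
        for j in idx:
            t, p = stats.ttest_ind(group_a[:, j], group_b[:, j])
            if p < alpha / max(n_kept, 1):
                hits.append(j)
    return hits
```

The power gain comes from the screening step spending the error budget on a handful of subnetwork tests, so the filtering step only needs to correct over the connections inside surviving subnetworks.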
Abstract:
PURPOSE: Quality of care and its measurement represent a considerable challenge for pediatric smaller-scale comprehensive cancer centers (pSSCC) providing surgical oncology services. It remains unclear whether center size and/or yearly case-flow numbers influence the quality of care, and therefore impact outcomes for this population of patients. PATIENTS AND METHODS: We performed a 14-year, retrospective, single-center analysis, assessing adherence to treatment protocols and surgical adverse events as quality indicators in abdominal and thoracic pediatric solid tumor surgery. RESULTS: Forty-eight patients, enrolled in a research-associated treatment protocol, underwent 51 cancer-oriented surgical procedures. All the protocols contain precise technical criteria, indications, and instructions for tumor surgery. Overall, compliance with such items was very high, with 997/1,035 items (96 %) meeting protocol requirements. There was no surgical mortality. Twenty-one patients (43 %) had one or more complications, for a total of 34 complications (66 % of procedures). Overall, 85 % of complications were grade 1 or 2 according to the Clavien-Dindo classification, requiring observation or minor medical treatment. Case-sample and outcome/effectiveness data were comparable to published series. Overall, our data suggest that even with the modest caseload of a pSSCC within a Swiss tertiary academic hospital, compliance with international standards can be very high, and the incidence of adverse events can be kept minimal. CONCLUSION: Open and objective data sharing, and discussion between pSSCCs, will ultimately benefit our patient populations. Our study is an initial step towards the enhancement of critical self-review and quality-of-care measurements in this setting.
Abstract:
The absolute K magnitudes and kinematic parameters of about 350 oxygen-rich Long-Period Variable stars are calibrated, by means of an up-to-date maximum-likelihood method, using HIPPARCOS parallaxes and proper motions together with radial velocities and, as additional data, periods and V-K colour indices. Four groups, differing by their kinematics and mean magnitudes, are found. For each of them, we also obtain the distributions of magnitude, period and de-reddened colour of the base population, as well as de-biased period-luminosity-colour relations and their two-dimensional projections. The SRa semiregulars do not seem to constitute a separate class of LPVs. The SRb semiregulars appear to belong to two populations of different ages; in a PL diagram, they constitute two evolutionary sequences towards the Mira stage. The Miras of the disk appear to pulsate in a lower-order mode. The slopes of their de-biased PL and PC relations are found to be very different from those of the oxygen Miras of the LMC. This suggests that a significant number of so-called Miras of the LMC are misclassified, and that the Miras of the LMC do not constitute a homogeneous group but include a significant proportion of metal-deficient stars, pointing to a relatively smooth star formation history. As a consequence, one may not trivially transpose the LMC period-luminosity relation from one galaxy to another.
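As a toy illustration of what a period-luminosity-colour relation looks like operationally, the sketch below fits M_K = a log P + b (V-K) + c by ordinary least squares on synthetic data. The coefficients and scatter are invented; the paper's actual calibration is a maximum-likelihood treatment that also models kinematics and sample biases, which plain least squares does not.

```python
import numpy as np

rng = np.random.default_rng(0)
logP = rng.uniform(2.0, 2.7, 350)          # log10 period in days (synthetic)
VK = rng.uniform(5.0, 9.0, 350)            # de-reddened V-K colour (synthetic)
M_K = -3.5 * logP + 0.1 * VK - 1.0 + rng.normal(0, 0.2, 350)

# Design matrix for M_K = a*logP + b*(V-K) + c, solved by least squares
A = np.column_stack([logP, VK, np.ones_like(logP)])
(a, b, c), *_ = np.linalg.lstsq(A, M_K, rcond=None)
print(f"M_K = {a:.2f} log P + {b:.2f} (V-K) + {c:.2f}")
```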
Abstract:
Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements, and we illustrate a data-flow proposal.
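To illustrate what "concurrent and pipelined" means at the smallest scale, here is a toy Python sketch (entirely ours, not the proposed Gaia design): stages run concurrently and hand data on via queues, loosely analogous to front-end readout feeding star data processing feeding downlink packing.

```python
import queue
import threading

def stage(name, inbox, outbox, work):
    """One pipeline stage: consume, transform, pass on (or emit)."""
    while True:
        item = inbox.get()
        if item is None:               # poison pill shuts the stage down
            if outbox is not None:
                outbox.put(None)
            break
        result = work(item)
        if outbox is not None:
            outbox.put(result)
        else:
            print(f"{name}: {result}")

q1, q2 = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=("process", q1, q2, lambda x: x * 2)),
    threading.Thread(target=stage, args=("downlink", q2, None,
                                         lambda x: f"packet{x}")),
]
for t in threads:
    t.start()
for sample in range(3):                # front-end readout feeds the pipeline
    q1.put(sample)
q1.put(None)
for t in threads:
    t.join()
```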
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensible sets, thereby establishing both effective data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In chapter 3, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
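The aggregation bias discussed in chapter 2 is easy to demonstrate numerically. A minimal sketch follows (Python; the lognormal shape and all parameter values are our assumptions, not the thesis experiments): for right-skewed distributions the mode sits well below the mean, and because only means add linearly, summing per-task modes understates the expected total project cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks = 20
# Lognormal task costs: mode = exp(mu - sigma^2), mean = exp(mu + sigma^2/2)
mu, sigma = 3.0, 0.8
mode_each = np.exp(mu - sigma**2)
mean_each = np.exp(mu + sigma**2 / 2)

print(f"sum of modes: {n_tasks * mode_each:8.1f}")   # the biased estimate
print(f"sum of means: {n_tasks * mean_each:8.1f}")   # the correct budget figure

# Monte Carlo check of the expected total project cost
totals = rng.lognormal(mu, sigma, size=(100_000, n_tasks)).sum(axis=1)
print(f"simulated E[total]: {totals.mean():8.1f}")
```

With these (invented) parameters the sum of modes is roughly 212 against an expected total of roughly 553, i.e., an underestimation of more than half.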
Abstract:
The activity of radiopharmaceuticals in nuclear medicine is measured before patient injection with radionuclide calibrators. In Switzerland, the general requirements for quality controls are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources (Co-57, Cs-137 and Co-60) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of (90)Sr and (90)Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a (90)Sr/(90)Y source and a (18)F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture, but only for (90)Y. Activity measurements of a (90)Sr/(90)Y source with the (90)Y calibration factor are therefore performed and corrected for the extra contribution of (90)Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimate the correction factor to be 1.117. Measurements with (18)F sources in a specific geometry are also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT, the metrology of (18)F is very important. The (18)F response normalized to the (137)Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.
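A back-of-the-envelope sketch of how such a correction factor might be applied (entirely ours: the helper name, the reading value, and the assumption that the 90Y-calibrated display over-reads and is divided by the factor are illustrative, since the abstract does not spell out the convention):

```python
# Correction factors reported above: source measurement vs Monte Carlo.
CORRECTION_MEASURED = 1.113
CORRECTION_MC = 1.117

def corrected_activity(reading_mbq, correction=CORRECTION_MEASURED):
    """Hypothetical helper: remove the extra 90Sr contribution from a
    reading taken with only the 90Y calibration factor (assumed here
    to over-read by the correction factor)."""
    return reading_mbq / correction

reading = 105.0  # MBq, invented display value
print(f"corrected activity: {corrected_activity(reading):.1f} MBq")

# The two determinations of the factor agree to about 0.4%:
rel = abs(CORRECTION_MC - CORRECTION_MEASURED) / CORRECTION_MEASURED
print(f"measured vs Monte Carlo factor: {rel:.2%} relative difference")
```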
Abstract:
Voting Advice Applications (VAAs) have become a central component of election campaigns worldwide. By matching the political preferences of voters to parties and candidates, the web application grants voters a look into their political mirror and reveals the most suitable political choices to them in terms of policy congruence. Both the dense, concise information on the electoral offer and the comparative nature of the application make VAAs an unprecedented information source for electoral decision making. In times when electoral choices are highly individualized and driven by political issue positions, an ever increasing number of voters turn to VAAs before casting their ballots. With VAAs in high demand, the question of their effects on voters has become a pressing research topic. In various countries, survey research has been used to proclaim an impact of VAAs on electoral behavior, yet practically all studies fail to provide the scientific evidence that would allow for such claims. In this thesis, I set out to systematically establish the causal link between VAA use and electoral behavior, using various data sources and appropriate statistical techniques. The focus lies on the Swiss VAA smartvote, introduced in the run-up to the 2003 Swiss federal elections and meanwhile an integral part of the national election campaign. In the last Swiss federal elections, smartvote produced over a million voting recommendations for an active electorate of two million, potentially guiding a vast number of voters in their choices on the ballot. To determine the effect of the VAA on electoral behavior, I analyze both voting preferences and vote choice among Swiss voters during two consecutive election periods. First, I introduce statistical techniques to adequately examine VAA effects in observational studies and use them to demonstrate that voters who used smartvote prior to the 2007 Swiss federal elections were significantly more likely to swing vote in the elections than non-users. Second, I analyze preference voting during the same election and show that the smartvote voting recommendation inclines politically knowledgeable voters to modify their ballots and cast candidate-specific preference votes. Third, to further probe the indication that smartvote use affects the preference structure of voters, I employ an experimental research design to demonstrate that voters who use the application tend to strengthen their vote propensities for their most preferred party and adapt their overall party preferences such that they consider more than one party an eligible vote option after engaging with the application. Finally, vote choice is examined for the 2011 Swiss federal election, showing once more that the VAA initiated a change of party choice among voters. In sum, this thesis presents empirical evidence for the transformative effect of the Swiss VAA smartvote on electoral behavior.
Abstract:
Achieving a high degree of dependability in complex macro-systems is challenging. Because of the large number of components and numerous independent teams involved, an overview of the global system performance is usually lacking to support both design and operation adequately. A functional failure mode, effects and criticality analysis (FMECA) approach is proposed to address the dependability optimisation of large and complex systems. The basic inductive FMECA model has been enriched to include considerations such as operational procedures, alarm systems, environmental and human factors, as well as operation in degraded mode. Its implementation in a commercial software tool allows active linking between the functional layers of the system and facilitates data processing and retrieval, contributing actively to system optimisation. The proposed methodology has been applied to optimise dependability in a railway signalling system. Signalling systems are a typical example of large complex systems made of multiple hierarchical layers. The proposed approach appears appropriate for assessing the global risk and availability levels of the system as well as for identifying its vulnerabilities. This enriched FMECA approach overcomes some of the limitations and pitfalls previously reported for classical FMECA approaches.
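For readers unfamiliar with the bookkeeping behind an FMECA, here is a minimal Python sketch. It implements only the classical severity x occurrence x detectability ranking; the failure modes and scores are invented, and the enriched functional FMECA proposed in the paper goes well beyond this.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    function: str
    mode: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: the classical FMECA criticality score."""
        return self.severity * self.occurrence * self.detection

# Invented signalling-flavoured examples, ranked to expose vulnerabilities
modes = [
    FailureMode("signal aspect display", "lamp failure", 7, 4, 2),
    FailureMode("track circuit", "false occupancy", 5, 6, 3),
    FailureMode("interlocking logic", "latent software fault", 9, 2, 8),
]

for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:4d}  {fm.function}: {fm.mode}")
```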
Abstract:
A review of nearly three decades of cross-cultural research shows that this domain still has to address several issues regarding the biases of data collection and sampling methods, the lack of clear and consensual definitions of constructs and variables, and measurement invariance issues that seriously limit the comparability of results across cultures. Indeed, a large majority of the existing studies are still based on the anthropological model, which compares two cultures and mainly uses convenience samples of university students. This paper stresses the need to incorporate a larger variety of regions and cultures in research designs, the necessity to theorize and identify a larger set of variables in order to describe a human environment, and the importance of overcoming methodological weaknesses to improve the comparability of measurement results. Cross-cultural psychology is at the next crossroads in its development, and researchers can certainly make major contributions to this domain if they can address these weaknesses and challenges.
Abstract:
Knowledge of the reflectivity of the sediment-covered seabed is of significant importance to marine seismic data acquisition and interpretation, as it governs the generation of reverberations in the water layer. In this context, pertinent but largely unresolved questions concern the importance of the typically very prominent vertical seismic velocity gradients as well as the potential presence and magnitude of anisotropy in soft surficial seabed sediments. To address these issues, we explore the seismic properties of granulometric end-member-type clastic sedimentary seabed models consisting of sand, silt, and clay, as well as scale-invariant stochastic layer sequences of these components characterized by realistic vertical gradients of the P- and S-wave velocities. Using effective media theory, we then assess the nature and magnitude of seismic anisotropy associated with these models. Our results indicate that anisotropy is rather benign for P-waves, and that the S-wave velocities in the axial directions differ only slightly. Because of the very high P- to S-wave velocity ratios in the vicinity of the seabed, our models nevertheless suggest that S-wave triplications may occur at very small incidence angles. To numerically evaluate the P-wave reflection coefficient of our seabed models, we apply a frequency-slowness technique to the corresponding synthetic seismic wavefields. Comparison with analytical plane-wave reflection coefficients calculated for corresponding isotropic elastic half-space models shows that the differences tend to be most pronounced in the vicinity of the elastic equivalent of the critical angle as well as in the post-critical range. We also find that the presence of intrinsic anisotropy in the clay component of our layered models tends to dramatically reduce the overall magnitude of the P-wave reflection coefficient as well as its variation with incidence angle.
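The effective-media step can be illustrated with Backus (1962) averaging, the standard long-wavelength equivalent of a finely layered isotropic stack (a vertically transversely isotropic medium). The sketch below is ours: the two-layer sand/clay values are invented stand-ins, not the paper's granulometric end-member models.

```python
import numpy as np

def backus(vp, vs, rho, h):
    """Thickness-weighted Backus average of isotropic layers.
    Returns the VTI stiffnesses (A, C, F, L, M) and mean density."""
    lam = rho * (vp**2 - 2 * vs**2)   # Lame lambda per layer
    mu = rho * vs**2                  # shear modulus per layer
    w = h / h.sum()                   # thickness weights
    avg = lambda x: np.sum(w * x)
    C = 1.0 / avg(1.0 / (lam + 2 * mu))
    F = avg(lam / (lam + 2 * mu)) * C
    A = (avg(4 * mu * (lam + mu) / (lam + 2 * mu))
         + avg(lam / (lam + 2 * mu))**2 * C)
    L = 1.0 / avg(1.0 / mu)
    M = avg(mu)
    return A, C, F, L, M, avg(rho)

# Invented sand/clay alternation near the seabed (m/s, kg/m^3, m)
vp = np.array([1700.0, 1520.0]); vs = np.array([300.0, 120.0])
rho = np.array([1900.0, 1600.0]); h = np.array([0.5, 0.5])

A, C, F, L, M, rb = backus(vp, vs, rho, h)
print(f"Vp vertical {np.sqrt(C/rb):.0f} m/s, horizontal {np.sqrt(A/rb):.0f} m/s")
print(f"Thomsen epsilon {(A - C) / (2 * C):.3f}, gamma {(M - L) / (2 * L):.3f}")
```

The very low shear velocities relative to Vp in such sediments are what drive the extreme Vp/Vs ratios the abstract refers to.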
Abstract:
Statistical models allow the representation of data sets and the estimation and/or prediction of the behavior of a given variable through its interaction with the other variables involved in a phenomenon. Among such models are autoregressive state-space models (ARSS) and linear regression models (LR), which allow the quantification of the relationships among soil-plant-atmosphere system variables. To compare the quality of ARSS and LR models for modeling the relationships between soybean yield and soil physical properties, this study used Akaike's Information Criterion, which provides a coefficient for the selection of the best model. The data sets were sampled in a Rhodic Acrudox soil, along a spatial transect with 84 points spaced 3 m apart. At each sampling point, soybean samples were collected for yield quantification. At the same site, soil penetration resistance was also measured, and soil samples were collected to measure soil bulk density in the 0-0.10 m and 0.10-0.20 m layers. Results showed an autocorrelation and cross-correlation structure between soybean yield and soil penetration resistance data. Soil bulk density data, however, were only autocorrelated in the 0-0.10 m layer and not cross-correlated with soybean yield. According to Akaike's Information Criterion, the autoregressive state-space models were more efficient than the equivalent simple and multiple linear regression models: the resulting AIC values were lower than those obtained by the regression models for all combinations of explanatory variables.
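As a reminder of how the criterion arbitrates between models, here is a minimal Python sketch. For least-squares fits with Gaussian errors, AIC reduces to n ln(RSS/n) + 2k up to an additive constant, and the lower value wins. The data below are synthetic stand-ins, not the transect measurements.

```python
import numpy as np

def aic_gaussian(rss, n, k):
    """AIC for a least-squares fit with k estimated parameters
    (Gaussian errors, constant terms dropped)."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 84                                   # transect points, as in the study
x = rng.normal(size=n)                   # e.g. penetration resistance (synthetic)
y = 0.6 * x + rng.normal(0, 0.8, n)      # e.g. soybean yield (synthetic)

# Model 1: yield ~ resistance (2 params); Model 2: intercept only (1 param)
X1 = np.column_stack([np.ones(n), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
rss1 = np.sum((y - X1 @ b1) ** 2)
rss2 = np.sum((y - y.mean()) ** 2)

print(f"AIC model 1: {aic_gaussian(rss1, n, 2):.1f}")  # lower: preferred
print(f"AIC model 2: {aic_gaussian(rss2, n, 1):.1f}")
```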
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, 77·3% for disease-free survival; 87·5%, 87·7%, 85·9% for overall survival; 89·9%, 88·7%, 88·1% for DRFI; and 86·1%, 85·3%, 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, a reduction in breast cancer recurrence and mortality is obtained by letrozole monotherapy when compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
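To illustrate the IPCW idea used in the analysis above, here is a hedged sketch with the Python lifelines library (our illustration, not the trial's analysis code; data, weights, and effect size are all synthetic). In a real IPCW analysis, patients are censored at crossover and remaining person-time is re-weighted by the inverse probability of staying uncensored, estimated from a model such as a logistic regression.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
arm = rng.integers(0, 2, n)                  # 0 = tamoxifen, 1 = letrozole
time = rng.exponential(8 - 1.5 * arm)        # years to event (synthetic)
event = rng.random(n) < 0.7                  # event indicator (synthetic)

# Toy weights standing in for the inverse probability of remaining
# uncensored; a real analysis would estimate these per patient and time.
p_uncensored = np.clip(rng.beta(8, 2, n), 0.2, 1.0)
df = pd.DataFrame({"arm": arm, "time": time, "event": event,
                   "w": 1.0 / p_uncensored})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)        # robust SEs needed with weights
print(cph.summary[["exp(coef)",
                   "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```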
Abstract:
In addition to genetic changes affecting the function of gene products, changes in gene expression have been suggested to underlie many or even most of the phenotypic differences among mammals. However, detailed gene expression comparisons were, until recently, restricted to closely related species, owing to technological limitations. Thus, we took advantage of the latest technologies (RNA-Seq) to generate extensive qualitative and quantitative transcriptome data for a unique collection of somatic and germline tissues from representatives of all major mammalian lineages (placental mammals, marsupials and monotremes) and birds, the evolutionary outgroup.

In the first major project of my thesis, we performed global comparative analyses of gene expression levels based on these data. Our analyses provided fundamental insights into the dynamics of transcriptome change during mammalian evolution (e.g., the rate of expression change across species, tissues and chromosomes) and allowed the exploration of the functional relevance and phenotypic implications of transcription changes at a genome-wide scale (e.g., we identified numerous potentially selectively driven expression switches).

In a second project of my thesis, also based on the unique transcriptome data generated in the context of the first project, we focused on the evolution of alternative splicing in mammals. Alternative splicing contributes to transcriptome complexity by generating several transcript isoforms from a single gene, which can thus perform various functions. To complete the global comparative analysis of gene expression changes, we explored patterns of alternative splicing evolution. This work uncovered several general and unexpected patterns of alternative splicing evolution (e.g., we found that alternative splicing evolves extremely rapidly) as well as a large number of conserved alternative isoforms that may be crucial for the functioning of mammalian organs.

Finally, the third project of my PhD consisted of a detailed analysis of the unique functional and evolutionary properties of the testis through an exploration of the extent of its transcriptome complexity. This organ was previously shown to evolve rapidly at both the phenotypic and molecular level, apparently because of the specific pressures that act on it and are associated with its reproductive function. Moreover, my analyses of the amniote tissue transcriptome data described above revealed strikingly widespread transcriptional activity of both functional and nonfunctional genomic elements in the testis compared with the other organs. To elucidate the cellular source and mechanisms underlying this promiscuous transcription in the testis, we generated deep-coverage RNA-Seq data for all major testis cell types as well as epigenetic data (DNA and histone methylation), using the mouse as a model system. The integration of these complete datasets revealed that meiotic and especially post-meiotic germ cells are the major contributors to the widespread functional and nonfunctional transcriptome complexity of the testis, and that this "promiscuous" spermatogenic transcription results, at least partially, from an overall transcriptionally permissive chromatin state. We hypothesize that this particularly open chromatin state results from the extensive chromatin remodeling that occurs during spermatogenesis, which ultimately leads to the replacement of histones by protamines in the mature spermatozoa. Our results have important functional and evolutionary implications (e.g., regarding new gene birth and testicular gene expression evolution).

Generally, these three large-scale projects of my thesis provide complete and massive datasets that constitute valuable resources for further functional and evolutionary analyses of mammalian genomes.
Abstract:
Soil properties play an important role in the spatial variability of crop yield. However, a low spatial correlation has generally been observed between maps of crop yield and of soil properties. The objectives of the present investigation were to assess the spatial pattern variability of soil properties and of corn yield at the same sampling intensity, and to evaluate their cause-and-effect relationships. The experimental site was structured in a grid of 100 referenced points, spaced at 10 m intervals along four parallel 250 m long rows spaced 4.5 m apart; the points thus formed a rectangle of four columns and 25 rows. Each sampling cell encompassed an area of 45 m² and consisted of five 10 m long crop rows, with the referenced point at the center. Samples were taken from the 0-0.1 m and 0.1-0.2 m layers, and soil physical and chemical properties were evaluated. Statistical analyses consisted of descriptive statistics and geostatistics. The spatial dependence of corn yield and soil properties was confirmed. The hypothesis of this study was also confirmed: when the soil is sampled at a sampling intensity similar to that used for crop yield assessments, correlations between the spatial distribution of soil characteristics and crop yield are observed. The spatial distribution pattern of soil properties explained 65 % of the spatial distribution pattern of corn yield, with clay content and percentage of soil base saturation explaining most of it.
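The geostatistical workhorse behind such spatial-dependence statements is the empirical semivariogram. A minimal Python sketch follows, on a synthetic 4 x 25 grid mimicking the layout described above; the yield values and trend are invented, and a real analysis would go on to fit a variogram model and krige.

```python
import numpy as np

def semivariogram(coords, values, lags, tol):
    """Empirical semivariogram: gamma(h) = half the mean squared difference
    between pairs of points separated by roughly lag h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gam = []
    for h in lags:
        mask = (np.abs(d - h) <= tol) & (d > 0)
        gam.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gam)

# 100-point grid roughly like the one described: 4 columns x 25 rows
xs, ys = np.meshgrid(np.arange(4) * 4.5, np.arange(25) * 10.0)
coords = np.column_stack([xs.ravel(), ys.ravel()])
rng = np.random.default_rng(4)
yield_vals = rng.normal(10, 1, 100) + 0.02 * coords[:, 1]  # synthetic trend

lags = np.arange(10, 101, 10.0)
print(np.round(semivariogram(coords, yield_vals, lags, tol=5.0), 3))
```

A rising gamma(h) that levels off at a sill indicates spatial dependence up to the corresponding range, which is what justifies interpolating yield and soil property maps from point samples.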