910 results for Multiple Baseline Design
Abstract:
Modern telecommunication equipment requires components that operate in many different frequency bands and support multiple communication standards, to cope with the growing demand for higher data rates. In addition, a growing number of standards adopt spectrum-efficient digital modulations, such as quadrature amplitude modulation (QAM) and orthogonal frequency division multiplexing (OFDM). These modulation schemes require accurate quadrature oscillators, which makes the quadrature oscillator a key block in modern radio frequency (RF) transceivers. The wide tuning range of inductorless quadrature oscillators makes them natural candidates, despite their higher phase noise in comparison with LC-oscillators. This thesis presents a detailed study of inductorless sinusoidal quadrature oscillators. Three quadrature oscillators are investigated: the active-coupling RC-oscillator, the novel capacitive-coupling RC-oscillator, and the two-integrator oscillator. The thesis includes a detailed analysis of the Van der Pol oscillator (VDPO), which is used as the base model for the analysis of the coupled oscillators; hence, the three oscillators are approximated by the VDPO. From the nonlinear Van der Pol equations, the oscillators' key parameters are obtained. The case without component mismatches is analysed first, followed by the case with mismatches. The research focuses on determining the impact of component mismatches on the oscillator key parameters: frequency, amplitude error, and quadrature error. Furthermore, the minimization of these errors by adjusting the circuit parameters is addressed. A novel quadrature RC-oscillator using capacitive coupling is proposed. The advantages of capacitive coupling are that it is noiseless, requires a small area, and has low power dissipation. The equations of the oscillation amplitude, frequency, quadrature error, and amplitude mismatch are derived. The theoretical results are confirmed by simulation and by measurement of two prototypes fabricated in 130 nm standard complementary metal-oxide-semiconductor (CMOS) technology. The measurements reveal that the power increase due to the coupling is marginal, leading to a figure-of-merit of -154.8 dBc/Hz. These results are consistent with the noiseless nature of this coupling and are comparable to those of the best state-of-the-art RC-oscillators in the GHz range, but with the lowest power consumption (about 9 mW). The results for the three oscillators show that the amplitude and quadrature errors are proportional to the component mismatches and inversely proportional to the coupling strength. Thus, increasing the coupling strength decreases both the amplitude and quadrature errors. With proper coupling strength, a quadrature error below 1° and an amplitude imbalance below 1% are obtained. Furthermore, the simulations show that increasing the coupling strength reduces the phase noise; hence, there is no trade-off between phase noise and quadrature error. In the two-integrator oscillator study, it was found that the quadrature error can be eliminated by adjusting the transconductances to compensate for the capacitance mismatch. However, to obtain outputs in perfect quadrature one must allow some amplitude error.
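For reference, the unforced Van der Pol oscillator referred to above is commonly written in the standard textbook form below; this is a generic formulation and not necessarily the exact parameterization used in the thesis:

```latex
\ddot{x} - \mu\left(1 - x^{2}\right)\dot{x} + \omega_{0}^{2}\,x = 0
```

Here \mu > 0 sets the strength of the nonlinear damping and \omega_{0} the natural frequency; for small \mu the limit cycle is nearly sinusoidal, which is why the VDPO is a convenient base model for nearly sinusoidal coupled RC-oscillators.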
Abstract:
Existing wireless networks are characterized by a fixed spectrum assignment policy. However, the scarcity of available spectrum and its inefficient usage demand a new communication paradigm that exploits the existing spectrum opportunistically. Future Cognitive Radio (CR) devices should be able to sense unoccupied spectrum and will allow the deployment of real opportunistic networks. Still, traditional Physical (PHY) and Medium Access Control (MAC) protocols are not suitable for this new type of network because they are optimized to operate over fixed assigned frequency bands. Therefore, novel PHY-MAC cross-layer protocols should be developed to cope with the specific features of opportunistic networks. This thesis is mainly focused on the design and evaluation of MAC protocols for Decentralized Cognitive Radio Networks (DCRNs). It starts with a characterization of the spectrum sensing framework based on the Energy-Based Sensing (EBS) technique, considering multiple scenarios. Then, guided by the sensing results obtained with this technique, we present two novel decentralized CR MAC schemes: the first designed to operate in single-channel scenarios and the second to be used in multichannel scenarios. Analytical models for the network goodput, packet service time, and individual transmission probability are derived and used to compute the performance of both protocols. Simulation results confirm the accuracy of the analytical models as well as the benefits of the proposed CR MAC schemes.
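As a rough illustration of the Energy-Based Sensing idea mentioned above (an illustrative sketch, not the authors' implementation), the snippet below declares a channel occupied when the average energy of the received samples exceeds a threshold; the sample count, signal model, and threshold value are hypothetical.

```python
import numpy as np

def energy_detector(samples: np.ndarray, threshold: float) -> bool:
    """Return True if the average sample energy suggests the channel is occupied."""
    energy = np.mean(np.abs(samples) ** 2)   # test statistic of the energy detector
    return energy > threshold

# Toy usage: noise-only vs. signal-plus-noise observations (illustrative values only).
rng = np.random.default_rng(0)
n = 1024
noise = rng.normal(0, 1, n)                              # idle channel
busy = noise + np.sin(2 * np.pi * 0.05 * np.arange(n))   # occupied channel

threshold = 1.2  # in practice chosen from a target false-alarm probability
print(energy_detector(noise, threshold))  # expected: False (channel free)
print(energy_detector(busy, threshold))   # expected: True  (channel busy)
```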
Abstract:
PhD thesis in Bioengineering
Abstract:
Master's dissertation in Advanced Optometry
Abstract:
"A workshop within the 19th International Conference on Applications and Theory of Petri Nets - ICATPN’1998"
Abstract:
This study assessed the development of sludge treatment and reuse policy since the original 1993 National Sludge Strategy Report (Weston-FTA, 1993). A review of the 48 sludge treatment centres, current wastewater treatment systems, and current or planned sludge treatment and reuse systems was carried out. Sludges from all Regional Sludge Treatment Centres (areas) were characterised through analysis of selected parameters. There have been many changes to the original policy as a result of boundary reviews, delays in developing sludge management plans, developments in technology, and changes in tendering policy, most notably a move to design-build-operate (DBO) projects. As a result, there are now 35 designated Hub Centres. Only 5 of the Hub Centres are producing Class A Biosolids: Ringsend, Killarney, Carlow, Navan and Osberstown. Ringsend is the only Hub Centre that is fully operational, treating sludge from surrounding regions by Thermal Drying. Killarney is producing Class A Biosolids using Autothermal Thermophilic Aerobic Digestion (ATAD) but is not yet treating imported sludge. The remaining three plants are producing Class A Biosolids using Alkaline Stabilisation. Anaerobic Digestion with post-pasteurisation is the most common form of sludge treatment, with 11 Hub Centres proposing to use it. One plant is using ATAD, two intend to use Alkaline Stabilisation, seven have selected Thermal Drying, and three have selected Composting. The remaining plants have not yet decided which sludge treatment to select, because of incomplete Sludge Management Plans and pending DBO contracts. Analysis of sludges from the Hub Centres showed that all Irish sewage sludge is safe for agricultural reuse as defined by the Waste Management (Use of Sewage Sludge in Agriculture) Regulations (S.I. 267/2001), provided that a nutrient management plan is taken into consideration and that the soil limits of the 1998 Waste Management Regulations (S.I. 148/1998) are not exceeded.
Abstract:
STUDY DESIGN: Prospective, controlled, observational outcome study using clinical, radiographic, and patient/physician-based questionnaire data, with patient outcomes at 12 months of follow-up. OBJECTIVE: To validate appropriateness criteria for low back surgery. SUMMARY OF BACKGROUND DATA: Most surgical treatment failures are attributed to poor patient selection, but no widely accepted consensus exists on detailed indications for appropriate surgery. METHODS: Appropriateness criteria for low back surgery were developed by a multispecialty panel using the RAND appropriateness method. Based on the panel criteria, a prospective study compared outcomes of patients appropriately and inappropriately treated at a single institution, with assessment at 12 months of follow-up. Included were patients with low back pain and/or sciatica referred to the neurosurgical department. Information about symptoms, neurologic signs, health-related quality of life (SF-36), disability status (Roland-Morris), and pain intensity (VAS) was assessed at baseline, at 6 months, and at 12 months of follow-up. The appropriateness criteria were administered prospectively to each clinical situation and outside of the clinical setting, with the surgeon and patients blinded to the results of the panel decision. The patients were further stratified into 2 groups: appropriate treatment group (ATG) and inappropriate treatment group (ITG). RESULTS: Overall, 398 patients completed all forms at 12 months. Treatment was considered appropriate for 365 participants and inappropriate for 33 participants. The mean improvement in the SF-36 physical component score at 12 months was significantly higher in the ATG (mean: 12.3 points) than in the ITG (mean: 6.8 points) (P = 0.01), as was the mean improvement in the SF-36 mental component score (ATG mean: 5.0 points; ITG mean: -0.5 points) (P = 0.02). Improvement was also significantly higher in the ATG for mean VAS back pain (ATG mean: 2.3 points; ITG mean: 0.8 points; P = 0.02) and the Roland-Morris disability score (ATG mean: 7.7 points; ITG mean: 4.2 points; P = 0.004). The ATG also had a higher improvement in mean VAS for sciatica (4.0 points) than the ITG (2.8 points), but the difference was not significant (P = 0.08). The SF-36 General Health score declined in both groups after 12 months; however, the decline was worse in the ITG (mean decline: 8.2 points) than in the ATG (mean decline: 1.2 points) (P = 0.04). Overall, in comparison with ITG patients, ATG patients had significantly higher improvement at 12 months, both statistically and clinically. CONCLUSION: In comparison to previously reported literature, our study is the first to assess the utility of appropriateness criteria for low back surgery at 1-year follow-up with multiple outcome dimensions. Our results confirm the hypothesis that application of appropriateness criteria can significantly improve patient outcomes.
Abstract:
A challenge when executing applications on a cluster is to improve performance by using the resources efficiently, and this challenge is greater in a distributed environment. With this challenge in mind, a set of rules is proposed for carrying out the computation on each node, based on an analysis of the computation and communication of the applications; a cell-mapping scheme and a method for planning the execution order are analysed, taking priority-based execution into account, where border cells have a higher priority than internal cells. The experiments show the overlapping of the internal computation with the communication of the border cells, yielding results where the speedup increases and the efficiency levels remain above 85%; finally, gains in execution time are obtained, leading to the conclusion that it is indeed possible to design an overlapping scheme that allows SPMD applications to be executed efficiently on a cluster.
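The overlap scheme described above can be sketched with non-blocking MPI calls: border cells are computed and exchanged first, and the interior is computed while the communication is in flight. This is a minimal illustrative sketch (1-D decomposition, periodic neighbours, a placeholder `update` kernel), not the authors' code.

```python
# Sketch: overlap interior computation with border-cell communication (mpi4py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic neighbours (assumption)

cells = np.random.rand(1000)
new = np.empty_like(cells)
halo_left, halo_right = np.empty(1), np.empty(1)

def update(values):
    """Placeholder for the application's per-cell computation."""
    return 0.5 * values

# 1) Border cells are computed first (higher priority) and sent without blocking.
new[0], new[-1] = update(cells[0]), update(cells[-1])
requests = [comm.Isend(new[0:1], dest=left),   comm.Isend(new[-1:], dest=right),
            comm.Irecv(halo_left, source=left), comm.Irecv(halo_right, source=right)]

# 2) Interior cells are computed while the halo exchange proceeds in the background.
new[1:-1] = update(cells[1:-1])

# 3) Communication is only waited on when the halos are actually needed.
MPI.Request.Waitall(requests)
```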
Abstract:
Background: Infection with EBV and a lack of vitamin D may be important environmental triggers of MS. 1,25-(OH)2D3 mediates a shift of antigen-presenting cells (APC) and CD4+ T cells to a less inflammatory profile. Although CD8+ T cells do express the vitamin D receptor, a direct effect of 1,25-(OH)2D3 on these cells has not been demonstrated until now. Since CD8+ T cells are important immune mediators of the inflammatory response in MS, we examined whether vitamin D directly affects the CD8+ T cell response, and more specifically whether it modulates the EBV-specific CD8+ T cell response. Material and Methods: To explore whether vitamin D status may influence the pattern of the EBV-specific CD8+ T cell response, PBMC of 10 patients with early MS and 10 healthy controls (HC) were stimulated with a pool of immunodominant 8-10-mer peptide epitopes known to elicit CD8+ T cell responses. PBMC were stimulated with this EBV CD8 peptide pool, medium (negative control), or anti-CD3/anti-CD28 beads (positive control). The following assays were performed: ELISPOT to assess the secretion of IFN-gamma by T cells in general; cytometric bead array (CBA) and ELISA to determine which cytokines were released by EBV-specific CD8+ T cells after six days of culture; and intracellular cytokine staining to determine which subtype of T cells secreted the given cytokines. To examine whether vitamin D could directly modulate CD8+ T cell immune responses, we depleted CD4+ T cells using negative selection. Results: We found that pre-treatment with vitamin D had an anti-inflammatory action on both EBV-specific CD8+ T cells and CD3/CD28-stimulated T cells: secretion of pro-inflammatory cytokines (IFN-gamma and TNF-alpha) was decreased, whereas secretion of anti-inflammatory cytokines (IL-5 and TGF-beta) was increased. At baseline, CD8+ T cells of early MS patients showed a higher secretion of TNF-alpha and a lower secretion of IL-5. Addition of vitamin D did not restore the levels of both cytokines to those of HC. Vitamin D-pretreated CD8+ T cells exhibited a decreased secretion of IFN-gamma and TNF-alpha, even after depletion of CD4+ T cells from the culture. Conclusion: Vitamin D has a direct anti-inflammatory effect on CD8+ T cells, independently of CD4+ T cells. CD8+ T cells of patients with early MS are less responsive to the anti-inflammatory effect of vitamin D than those of HC, pointing toward an intrinsic dysregulation of CD8+ T cells. The modulation of EBV-specific CD8+ T cells by vitamin D suggests that there may be interplay between these two major environmental factors of MS. This study was supported by a grant from the Swiss National Foundation (PP00P3-124893), and by an unrestricted research grant from Bayer to RDP.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The sequence tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence-tag accumulations create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
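To make the two ideas above concrete in the simplest terms (an illustrative sketch, not the thesis algorithms), the snippet below draws an unbiased random subsample of mapped tag positions and flags genomic bins whose tag counts are extreme enough to resemble artifactual hot-spots; the bin size and fold cutoff are hypothetical.

```python
import random
from collections import Counter

def downsample_tags(positions, k, seed=0):
    """Unbiased random sample of k tag positions, without replacement."""
    return random.Random(seed).sample(positions, k)

def hotspot_bins(positions, bin_size=1000, fold_cutoff=50):
    """Flag bins whose tag count exceeds fold_cutoff times the mean bin count."""
    counts = Counter(pos // bin_size for pos in positions)
    mean_count = sum(counts.values()) / len(counts)
    return {b for b, c in counts.items() if c > fold_cutoff * mean_count}

# Toy usage: uniform background plus one artificial pile-up around position 2,000,000.
rng = random.Random(1)
tags = [rng.randrange(0, 10_000_000) for _ in range(100_000)] + [2_000_000] * 5_000
print(len(downsample_tags(tags, 10_000)))   # 10000 tags kept
print(hotspot_bins(tags))                   # expected to contain bin 2000
```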
Abstract:
While mobile technologies can provide great personalized services for mobile users, they also threaten their privacy. This personalization-privacy paradox is particularly salient for context-aware mobile applications, where a user's behaviors, movements, and habits can be associated with their personal identity. In this thesis, I studied privacy issues in the mobile context, focusing particularly on the design of an adaptive privacy management system for context-aware mobile devices, and explored the role of personalization and of control over the user's personal data. This allowed me to make multiple contributions, both theoretical and practical. On the theoretical side, I propose and prototype an adaptive Single Sign-On solution that uses the user's context information to protect private information on smartphones. To validate this solution, I first showed that a user's context is a unique user identifier and that context-awareness technology can increase the user's perceived ease of use of the system and the service provider's authentication security. I then followed a design science research paradigm and implemented this solution in a mobile application called "Privacy Manager". I evaluated its utility through several focus group interviews; overall, the proposed solution fulfilled the expected functions and users expressed their intention to use the application. To better understand the personalization-privacy paradox, I built on the theoretical foundations of privacy calculus and the technology acceptance model to conceptualize a theory of users' mobile privacy management. I also examined the role of personalization and control in my model and how these two elements interact with privacy calculus and the mobile technology model. In the practical realm, this thesis contributes to the understanding of the trade-off between the benefits of personalized services and the privacy concerns they may raise. By pointing out new opportunities to rethink how a user's context information can protect private data, it also suggests new elements for privacy-related business models.
Abstract:
DREAM is an initiative that allows researchers to assess how well their methods or approaches can describe and predict networks of interacting molecules [1]. Each year, recently acquired datasets are released to predictors ahead of publication. Researchers typically have about three months to predict the masked data or network of interactions, using any predictive method. Predictions are assessed prior to an annual conference where the best predictions are unveiled and discussed. Here we present the strategy we used to make a winning prediction for the DREAM3 phosphoproteomics challenge. We used Amelia II, a multiple imputation software package developed by Gary King, James Honaker and Matthew Blackwell [2] in the context of the social sciences, to predict the 476 out of 4624 measurements that had been masked for the challenge. To choose the best possible multiple imputation parameters for the challenge, we evaluated how transforming the data and varying the imputation parameters affected the ability to predict additionally masked data. We discuss the accuracy of our findings and show that multiple imputation applied to this dataset is a powerful method to accurately estimate the missing data. We postulate that multiple imputation methods might become an integral part of experimental design, as a means to achieve cost savings or to increase the number of samples that can be handled for a given cost.
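The challenge entry itself used the Amelia II package (an R tool); as a rough stand-in for readers without R, the sketch below illustrates the same multiple-imputation idea with scikit-learn's IterativeImputer, drawing several posterior samples for artificially masked entries and averaging them. All data, masking fractions, and parameter values here are illustrative, not those used for DREAM3.

```python
import numpy as np
# IterativeImputer is flagged experimental in scikit-learn, hence the enabling import.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 3] += 0.8 * X[:, 0]                  # add structure the imputer can exploit
mask = rng.random(X.shape) < 0.1          # mask ~10% of entries
X_missing = np.where(mask, np.nan, X)

# Multiple imputation: m independent draws from the approximate posterior, then combined.
m = 5
draws = [IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X_missing)
         for i in range(m)]
X_imputed = np.mean(draws, axis=0)

print("RMSE on masked entries:",
      np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2)))
```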
Abstract:
Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear to non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data on body mass index in children, we review statistical methods for the analysis of the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, one should rely on multiple repeated measurements at different times, and a linear random-effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method, which consists in adjusting the estimated regression coefficient of observed change on baseline value for the measurement error variance, provides accurate estimates. The adequacy of methods to assess the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
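For readers unfamiliar with Blomqvist's correction, one common way to write it under the classical measurement-error model is sketched below; the notation is ours and not necessarily that of the paper. With true baseline T, observed baseline X = T + e (error variance \sigma_e^2, independent of T), and b the naive regression slope of observed change on X, the slope \beta of true change on true baseline is recovered as

```latex
\beta \;=\; \frac{b + (1 - \lambda)}{\lambda},
\qquad
\lambda \;=\; \frac{\sigma_T^{2}}{\sigma_T^{2} + \sigma_e^{2}},
```

so the naive slope b is typically biased towards negative values (regression to the mean), and the bias grows as the reliability \lambda of the baseline measurement decreases.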
Abstract:
BACKGROUND: A software-based tool (Optem) has been developed to automate the recommendations of the Canadian Multiple Sclerosis Working Group for optimizing MS treatment, in order to avoid subjective interpretation. METHODS: Treatment Optimization Recommendations (TORs) were applied to our database of patients treated with IFN beta-1a IM. Patient data were assessed during year 1 for disease activity, and patients were assigned to 2 groups according to TOR: "change treatment" (CH) and "no change treatment" (NCH). These assessments were then compared to the observed clinical outcomes for disease activity over the following years. RESULTS: We have data on 55 patients. The "change treatment" status was assigned to 22 patients, and "no change treatment" to 33 patients. The estimated sensitivity and specificity according to last-visit status were 73.9% and 84.4%. During the following years, the relapse rate was always higher in the "change treatment" group than in the "no change treatment" group (5 y: CH 0.7, NCH 0.07; p < 0.001; 12 m - last visit: CH 0.536, NCH 0.34). We obtained the same results with the EDSS (4 y: CH 3.53, NCH 2.55; annual progression rate in 12 m - last visit: CH 0.29, NCH 0.13). CONCLUSION: Applying TOR at the first year of therapy allowed accurate prediction of continued disease activity in terms of relapses and disability progression.
Abstract:
Retinitis Pigmentosa (RP) is a heterogeneous group of inherited retinal dystrophies characterised ultimately by the loss of photoreceptor cells. RP is the leading cause of visual loss in individuals younger than 60 years, with a prevalence of about 1 in 4000. The molecular genetic diagnosis of autosomal recessive RP (arRP) is challenging due to its large genetic and clinical heterogeneity. Traditional methods for sequencing arRP genes are often laborious and not easily available, and a screening technique that enables rapid detection of the genetic cause would be very helpful in clinical practice. The goal of this study was to develop and apply microarray-based resequencing technology capable of detecting both known and novel mutations on a single high-throughput platform. Hence, the coding regions and exon/intron boundaries of 16 arRP genes were resequenced using microarrays in 102 Spanish patients with a clinical diagnosis of arRP. All detected variations were confirmed by direct sequencing, and potential pathogenicity was assessed by functional predictions and frequency in controls. For validation purposes, 4 positive controls consisting of previously identified changes were hybridized on the array. As a result of the screening, we detected 44 variants, of which 15, found in 14 arRP families (14%), are very likely pathogenic. Finally, the design of this array can easily be transformed into an equivalent diagnostic system based on targeted enrichment followed by next-generation sequencing.