968 results for Ammassi, Galassie, emissioni, non termiche, cluster, relitti, radio


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we have demonstrated how existing programming environments, tools and middleware can be used to study the execution performance of parallel and sequential applications on a non-dedicated cluster. The set of parallel and sequential benchmark applications selected for, and used in, the experiments was characterized, and the experiment requirements were shown.

Relevance:

100.00%

Publisher:

Abstract:

We assert that companies can make more money and research institutions can improve their performance if inexpensive clusters and enterprise grids are exploited. In this paper we support this claim by studying how programming environments, tools and middleware can be used to execute parallel and sequential applications, multiple parallel applications running simultaneously on a non-dedicated cluster, and parallel applications on an enterprise grid, and by showing that execution performance improved. For this purpose, the execution environment and the parallel and sequential benchmark applications selected for, and used in, the experiments were characterised.

Relevance:

100.00%

Publisher:

Abstract:

Today, approximately 29% of the world population uses the Internet, compared with 38% in Brazil, which shows its importance in people's routines not only in Brazil but also worldwide. Since the Internet is a communication medium, this research evaluates the influence of interactivity as a factor in increasing the memorization of Internet sites. According to the literature, multiway, immediacy and contingency factors increase interactivity, and sites that provide one or more of these factors influence memorization. Twenty in-depth personal interviews were conducted to improve understanding of the issue, to identify leads and to elaborate our hypotheses, followed by a quantitative survey of 300 people. Hypotheses were tested using chi-square tests and hierarchical and non-hierarchical cluster analyses. Results showed that the smaller the number of leads on a specific website, the greater its memorization and access. The theoretical contribution of this investigation is that websites that offer fewer leads are more interactive, which causes them to be remembered. The managerial implication is that websites with a clear position and a small quantity of information or leads tend to be better remembered and more accessed by Internet users.
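The chi-square hypothesis test mentioned above can be sketched as follows. All numbers in the contingency table are hypothetical stand-ins, not the study's data; the idea is only to show the mechanics of testing whether "few vs many leads" and "remembered vs not remembered" are independent.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table:
# rows = sites with few vs many leads,
# columns = respondents who remembered / did not remember the site.
table = np.array([[90, 60],    # few leads
                  [45, 105]])  # many leads

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

A small p-value would lead to rejecting independence, i.e. the number of leads is associated with memorization.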

Relevance:

100.00%

Publisher:

Abstract:

Background: Severe dengue virus (DENV) disease is associated with extensive immune activation, characterized by a cytokine storm. Previously, elevated lipopolysaccharide (LPS) levels in dengue were found to correlate with clinical disease severity. In the present cross-sectional study we identified markers of microbial translocation and immune activation, which are associated with severe manifestations of DENV infection. Methods: Serum samples from DENV-infected patients were collected during the outbreak in 2010 in the State of São Paulo, Brazil. Levels of LPS, lipopolysaccharide binding protein (LBP), soluble CD14 (sCD14) and IgM and IgG endotoxin core antibodies were determined by ELISA. Thirty cytokines were quantified using a multiplex Luminex system. Patients were classified according to the 2009 WHO classification and the occurrence of plasma leakage/shock and hemorrhage. Moreover, a (non-supervised) cluster analysis based on the expression of the quantified cytokines was applied to identify groups of patients with similar cytokine profiles. Markers of microbial translocation were linked to groups with similar clinical disease severity and clusters with similar cytokine profiles. Results: Cluster analysis indicated that LPS levels were significantly increased in patients with a profound pro-inflammatory cytokine profile. LBP and sCD14 showed significantly increased levels in patients with severe disease in the clinical classification and in patients with severe inflammation in the cluster analysis. With both the clinical classification and the cluster analysis, levels of IL-6, IL-8, sIL-2R, MCP-1, RANTES, HGF, G-CSF and EGF were associated with severe disease. Conclusions: The present study provides evidence that both microbial translocation and extensive immune activation occur during severe DENV infection and may play an important role in the pathogenesis.
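The unsupervised clustering step described in the Methods can be sketched as follows. The data, the linkage choice and the number of groups are all hypothetical illustrations; the study's actual procedure may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical data: 60 patients x 30 cytokine concentrations
# (the study quantified 30 cytokines), random stand-ins here.
X = np.log1p(rng.lognormal(0.0, 1.0, size=(60, 30)))

# Ward linkage on standardized profiles, cut into 3 patient groups.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
print(sorted(set(groups)))
```

Each resulting group collects patients with similar cytokine profiles, which can then be cross-tabulated against clinical severity.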

Relevance:

100.00%

Publisher:

Abstract:

In recent years, industry interest in developing alternatives to traditional food treatments has grown. Among the various non-thermal technologies is gas plasma. Plasma is a neutral ionized gas composed of several kinds of particles. The main agents responsible for the bactericidal action appear to be reactive oxygen and nitrogen species, which damage microbial cells. Recently, "plasma-activated water" has been under study. The general aim of this thesis was to verify whether plasma treatments of saline solutions (NaCl 0.9%) can "activate" them, endowing them with bactericidal activity against a strain of Listeria monocytogenes (strain 56 Ly), and to establish whether the material of the electrodes of a DBD-type plasma generator can influence the efficacy of the treated solutions. Plasma treatments of saline solutions were therefore carried out using electrodes of different materials: glass, brass, steel and silver; the resulting solutions were analysed chemically, and their decontaminating action against Listeria monocytogenes 56 Ly was evaluated in the model system and, preliminarily, in a real system consisting of julienne carrots deliberately contaminated with L. monocytogenes. The results showed that the sensitivity of L. monocytogenes 56 Ly to plasma-treated aqueous solutions is influenced both by the electrode material and by the exposure time. Steel proved to be the most effective material. As for the real system, washing with plasma-activated water for 60 minutes produced an inactivation level of about 1 log cycle, similar to that obtained with the hypochlorite solution.
In conclusion, the results showed a lower efficacy of the plasma treatments when applied to real systems, but gas plasma nonetheless has good potential for the decontamination of fruit and vegetable products.

Relevance:

100.00%

Publisher:

Abstract:

An integrated approach for multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) clustering and includes bias-field correction, contextual constraints over the spatial intensity distribution, and allowance for non-spherical cluster shapes in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost function. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance to account for the non-spherical shape of the clusters in the feature space. These algorithms are quantitatively evaluated on MR brain images using similarity measures.
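For orientation, the standard FCM iteration that this method extends can be sketched as below: plain Euclidean distance, no bias-field or contextual terms, synthetic two-cluster data. The paper's method replaces the distance with the G-K measure and adds the regularization terms described above.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal standard FCM (a sketch; the paper adds bias-field,
    neighborhood regularization and a G-K distance on top of this)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(iters):
        W = U ** m                                # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return centers, U

# Two well-separated synthetic clusters around (0,0) and (3,3).
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X)
print(np.round(centers, 1))
```

The membership update is the usual u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)); swapping the Euclidean distance for a covariance-weighted one is what lets G-K handle non-spherical clusters.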

Relevance:

100.00%

Publisher:

Abstract:

Intensity non-uniformity (bias field) correction, contextual constraints over the spatial intensity distribution, and non-spherical cluster shapes in the feature space are incorporated into fuzzy c-means (FCM) clustering for the segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. The performance of the segmentation is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of a cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in segmentation quality obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package commonly used for tissue classification.

Relevance:

100.00%

Publisher:

Abstract:

The LTE standard has positioned itself as one of the keys for telecommunication operators to address, in a cost-efficient way, the growth in mobile traffic demand foreseen for the coming years, since it is more scalable in the core network and more flexible in the radio interface than its predecessor technologies. In this regard, regulators also need to guarantee adequate, equitable and non-discriminatory access to the radio spectrum, providing a stable environment for the deployment of advanced mobile communication networks. Besides the more flexible regulatory framework for radio spectrum in Europe, which has allowed new technologies to be deployed in the historic GSM frequency bands, additional spectrum has been made available for IMT systems in new frequency bands, which has in turn posed new challenges for technology and regulation. The fragmentation of the spectrum available for mobile communications has driven the development of carrier-aggregation techniques in the latest releases of the LTE standard, which allow the radio resources to be better exploited as a whole. Nevertheless, spectrum below 1 GHz remains scarce, since mobile traffic grows faster than spectral efficiency and spectrum resources, and the 900 MHz band is still used for GSM services, which has only aggravated the dispute between terrestrial broadcasting and mobile communication services over the upper part of the UHF band. In particular, the 700 MHz band is emerging as one of the next bands to increase the spectrum available for mobile services, although its release by the current Digital Terrestrial Television networks presents considerable difficulties in those Member States where DTT is the main free-to-air audiovisual platform, opening a debate on the long-term audiovisual model in Europe.
In addition, this decade's public policies promoting access to fast and ultra-fast broadband have set ambitious targets for the year 2020, both at the European level and in the individual Member States. Universal access to broadband networks of at least 30 Mbps is one of the main challenges. The expectations raised by LTE technology and the availability of new frequency bands make fixed wireless access services especially relevant to the stated public-policy objectives, which, as has been acknowledged on several occasions, can only be achieved with a combination of different technologies. For this doctoral thesis a series of techno-economic models has been developed in order to carry out a prospective analysis of three cases of special relevance in the deployment of LTE networks: first, the economic valuation of the 700 MHz band; second, the assessment of business models and cost reductions considering femtocell technologies; and finally, the feasibility of LTE fixed wireless access networks for closing the digital divide in access to 30 Mbps broadband.
Regarding the application of techno-economic analysis to the valuation of the 700 MHz spectrum, the results obtained highlight two fundamental issues. First, operators need to be assigned more spectrum to meet medium-term mobile traffic demand forecasts. Second, there is a notable difference in the deployment costs of an LTE network depending on whether or not spectrum below 1 GHz is available, but this cost difference decreases as further sub-1 GHz spectrum is added. Thus, allocating the 700 MHz band to mobile communication services brings a relevant reduction in deployment costs if the operator holds no spectrum in the 800 MHz band, but not if it already holds low-band spectrum for the deployment. It can therefore be concluded that the price operators will be willing to pay for 700 MHz spectrum will depend on whether they already have spectrum in the 800 MHz band. However, since competition for this spectrum will be weaker, the revenues to be expected from the awards of this new band will generally be lower than those obtained for the digital dividend, even though for some operators this spectrum would be as valuable as the 800 MHz spectrum.
Second, with regard to femtocell deployment, some conclusions can be drawn in terms of deployment cost savings and of the viability of the business models they enable. The savings from introducing femtocells into an LTE deployment, compared with an exclusively macrocellular deployment, have been shown to grow as the bandwidth available to the macrocellular network shrinks. Along these lines, for a convergent operator the deployment of femtocells makes economic sense if the available bandwidth is scarce (around 2x10 MHz), which in the Spanish case may reflect the situation of fixed-segment operators that are new entrants in the mobile market. Open-access models, on the other hand, are attractive for mobile-only operators, because they make costs more flexible by replacing macrocellular base stations with femtocell deployments, but they need to be deployed in areas with a relatively high population density so that the femtocells simultaneously offload the traffic of several users of the macrocellular network. Nevertheless, femtocells are beneficial in any case if it is the user who bears the costs of the femtocell and the backhaul, which only seems likely if they are integrated into a business model for marketing new services. Therefore, in much of the casuistry studied, deploying femtocells only makes sense if they increase revenue per user by marketing value-added services that require guaranteed quality of service, thereby exploiting their main competitive advantage over WiFi technology.
Finally, regarding the role of LTE technology in the provision of fixed wireless access services for 30 Mbps broadband, a TD-LTE model has been developed and a prospective study has been carried out for the Spanish case using the techno-economic analysis methodology. The results obtained predict an FTTH coverage footprint of 74% of households by 2020, and show that a TD-LTE network in the 3.5 GHz band is viable for increasing the coverage of 30 Mbps services by a further 14 percentage points. Together with the coverage of other networks, 30 Mbps coverage would, according to the feasibility of the deployments, reach 95% in Spain by 2020. In summary, the results obtained show in all cases the capability of LTE technology to face new challenges related to the growth of mobile traffic, especially critical in the most urban areas, and to the closing of the digital divide in access to fast broadband in the most rural areas.
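The cost gap between sub-1 GHz and higher bands discussed above comes down to cell dimensioning: lower frequencies propagate further, so fewer sites are needed to cover the same area. A toy sketch with entirely illustrative coverage radii (not the thesis's model or its parameter values):

```python
import math

def sites_needed(area_km2, cell_radius_km):
    # Hexagonal cell footprint is roughly 2.6 * r^2.
    return math.ceil(area_km2 / (2.6 * cell_radius_km ** 2))

# Hypothetical rural coverage radii per band, for illustration only.
radius_km = {"700 MHz": 4.0, "800 MHz": 3.8, "2600 MHz": 1.2}
for band, r in radius_km.items():
    print(band, sites_needed(10_000, r), "sites")
```

Since site count scales with 1/r^2, even a modest increase in coverage radius at low frequencies translates into a large reduction in the number of base stations, which is the mechanism behind the 700 MHz valuation results.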

Relevance:

100.00%

Publisher:

Abstract:

The aim of the present study was to trace the mortality profile of the elderly in Brazil using two neighboring age groups: 60 to 69 years (young-old) and 80 years or more (oldest-old). To do this, we sought to characterize the trend and distinctions of different mortality profiles, as well as the quality of the data and associations with socioeconomic and sanitary conditions in the micro-regions of Brazil. Data were collected from the Mortality Information System (SIM) and the Brazilian Institute of Geography and Statistics (IBGE). Based on these data, the coefficients of mortality were calculated for the chapters of the International Classification of Diseases (ICD-10). A polynomial regression model was used to ascertain the trend of the main chapters. Non-hierarchical cluster analysis (K-Means) was used to obtain the profiles for different Brazilian micro-regions. Factorial analysis of the contextual variables was used to obtain the socio-economic and sanitary deprivation indices (IPSS). The trend of the CMId and of the ratio of its values in the two age groups confirmed a decrease in most of the indicators, particularly for badly-defined causes among the oldest-old. Among the young-old, the following profiles emerged: the Development Profile; the Modernity Profile; the Epidemiological Paradox Profile and the Ignorance Profile. Among the oldest-old, the latter three profiles were confirmed, in addition to the Low Mortality Rates Profile. When comparing the mean IPSS values in global terms, all of the groups were different in both of the age groups. The Ignorance Profile was compared with the other profiles using orthogonal contrasts. This profile differed from all of the others in isolation and in clusters. However, the mean IPSS was similar for the Low Mortality Rates Profile among the oldest-old.
Furthermore, associations were found between the data quality indicators, the CMId for badly-defined causes, the general coefficient of mortality for each age group (CGMId) and the IPSS of the micro-regions. The worst rates were recorded in areas with the greatest socioeconomic and sanitary deprivation. The findings of the present study show that, despite the decrease in the mortality coefficients, there are notable differences in the profiles related to contextual conditions, including regional differences in data quality. These differences increase the vulnerability of the age groups studied and the health inequities that are already present.
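The non-hierarchical (K-Means) profiling step can be sketched as follows. The mortality-rate matrix below is a random stand-in (Brazil's roughly 558 micro-regions are used only as a plausible sample size), and the number of profiles is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical stand-in: 558 micro-regions x mortality coefficients
# for a few ICD-10 chapters (columns).
rates = rng.gamma(shape=2.0, scale=50.0, size=(558, 6))

# Non-hierarchical clustering into 4 mortality profiles.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(rates)
print(np.bincount(km.labels_))
```

Each cluster centroid then characterizes one mortality profile, which can be compared across clusters with a deprivation index such as the IPSS.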

Relevance:

70.00%

Publisher:

Relevance:

50.00%

Publisher:

Abstract:

The aim of this PhD thesis is the study of the nuclear properties of radio-loud AGN. Multiple and/or recent mergers in the host galaxy and/or the presence of a cool core in the galaxy cluster can play a role in the formation and evolution of the radio source. We focus on Brightest Cluster Galaxies (BCGs), a unique class of objects (Lin & Mohr 2004). We investigate their parsec-scale radio emission with Very Long Baseline Interferometry (VLBI) observations. From the literature or from new data, we collect and analyse VLBA (Very Long Baseline Array) observations at 5 GHz of a complete sample of BCGs and of "normal" radio galaxies from the Bologna Complete Sample (BCS). Results on the nuclear properties of BCGs come from the comparison with the results for the BCS. Our analysis finds a possible dichotomy between BCGs in cool-core clusters and those in non-cool-core clusters. Only one-sided BCGs have kinematic properties similar to those of FRIs. Furthermore, the dominance of two-sided jet structures only in cooling clusters suggests sub-relativistic jet velocities. The different jet properties can be related to a different jet origin or to interaction with a different ISM. We discuss possible explanations for this at greater length.

Relevance:

50.00%

Publisher:

Abstract:

Most of the celestial bodies that populate the universe emit "light". This means that they are visible to our eyes when we raise them to the night sky or, if they are too distant, to powerful optical telescopes. In most cases this luminosity has a thermonuclear origin, that is, it is due to sources such as stars, in which the high internal temperature associated with the fusion reactions that keep them alive produces black-body radiation in the optical band. However, since the visible part constitutes only a tiny portion of the whole electromagnetic spectrum, investigating emission at other frequencies such as radio, infrared, ultraviolet, X-ray and gamma-ray reveals another category of objects whose peculiar characteristics make them a fascinating field of study for many reasons: Active Galactic Nuclei (AGN) (Figure 1). They are quite rare (less than 1% of the total compared with normal galaxies) and short-lived, often very distant and powerful, seething with an intense activity that seems to grow with redshift; it is therefore hypothesized that they are young and that they open a window onto the phase of every galaxy's life that follows the initial collapse, making them fundamental for developing cosmological theories. Moreover, although they are often hosted by galaxies that are also visible in the optical, their emission mechanisms and special behaviour require completely different analyses and explanations. The detection method is also distinctive: to cover these frequencies, an innovative technique has been developed that gives excellent results, even better than those of traditional telescopes: radio interferometry.
The thesis is divided into two parts: the first sketches a portrait of AGN; the second analyses the flux from the radio galaxy 3C 84 at 15.4 and 43 GHz and proposes a possible site of origin for the observed brightness increase.

Relevance:

40.00%

Publisher:

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes the PU signal is 'stationary' and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as 'non-stationary', undergoing state changes. Existing studies have focused on non-stationary PU during the transmission period; however, very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first stage investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second stage develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the performance actually achieved.
Through analysing the statistical properties of the duty cycle, performance degradation is shown to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve detection performance and robustness to changes in duty cycle. This detector is most suitable for applications that require long sensing periods. A second detector, the duty-cycle-based energy detector, is formulated by integrating the distribution of the duty cycle into the test statistic of the energy detector, and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU, so the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for the application of spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection model. Following this framework ensures that the signal model accurately reflects practical behaviour while the detection model implemented is also suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise the sensing efficiency for a non-stationary PU. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty-cycle-based detector.
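The conventional baseline that the duty-cycle-based detector extends is the classic energy detector: average the received signal energy over N samples and compare it with a threshold. A minimal sketch with an illustrative threshold and signal model (not the thesis's detector, which folds the duty-cycle distribution into this statistic):

```python
import numpy as np

def energy_detect(x, threshold):
    """Classic energy detector: decide 'PU present' if mean |x|^2 > threshold."""
    return np.mean(np.abs(x) ** 2) > threshold

rng = np.random.default_rng(0)
n = 1000
noise = rng.normal(0, 1, n)           # H0: noise only, unit noise power
signal = noise + rng.normal(0, 1, n)  # H1: a PU signal raises the energy

threshold = 1.3  # illustrative, set above the unit noise power
print(energy_detect(noise, threshold), energy_detect(signal, threshold))
```

If the PU switches state partway through the N sensing samples, the H1 energy no longer follows the stationary model assumed when setting the threshold, which is exactly the degradation the first investigation quantifies.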

Relevance:

40.00%

Publisher:

Abstract:

Recent advancements in the area of organic polymer applications demand novel and advanced materials with desirable surface, optical and electrical properties for use in emerging technologies. This study examines the fabrication and characterization of polymer thin films from the non-synthetic Terpinen-4-ol monomer using radio-frequency plasma polymerization. The optical properties, thickness and roughness of the thin films were studied in the wavelength range 200–1000 nm using ellipsometry. Polymer thin films with thicknesses from 100 nm to 1000 nm were fabricated, and the films exhibited smooth and defect-free surfaces. At a wavelength of 500 nm, the refractive index and extinction coefficient were found to be 1.55 and 0.0007, respectively. The energy gap was estimated to be 2.67 eV, a value falling into the semiconducting Eg region. The obtained optical and surface properties of the Terpinen-4-ol based films substantiate their candidacy as a promising low-cost material with potential applications in the electronics, optics and biomedical industries.
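For reference, the extinction coefficient quoted above can be converted into an absorption coefficient with the standard relation alpha = 4*pi*k / lambda, sketched below using the values reported in the abstract (the conversion itself is standard; its use here is only illustrative):

```python
import math

# Values reported at 500 nm for the plasma-polymerized film.
k = 0.0007            # extinction coefficient
wavelength_m = 500e-9 # 500 nm

# Standard relation between extinction and absorption coefficients.
alpha = 4 * math.pi * k / wavelength_m
print(f"alpha = {alpha:.2e} 1/m")
```

The small resulting absorption coefficient is consistent with the films being nearly transparent at 500 nm.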

Relevance:

40.00%

Publisher:

Abstract:

Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The training time complexity of SVMs, which depends on the data size, usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
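The general idea of cluster-based selective sampling can be sketched as follows. This is a generic stand-in, not the authors' kernel-based incremental algorithm, and the cluster counts and sample sizes are hypothetical: cluster each class, keep only a few representatives per cluster, then train the SVM on that reduced set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Cluster each class and keep the points nearest each centroid
# as a reduced training set (illustrative abstraction of the data).
keep = []
for cls in np.unique(y):
    idx = np.where(y == cls)[0]
    km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(X[idx])
    d = np.linalg.norm(X[idx] - km.cluster_centers_[km.labels_], axis=1)
    for c in range(50):
        members = idx[km.labels_ == c]
        keep.extend(members[np.argsort(d[km.labels_ == c])[:5]])

# Train the SVM on the representatives only, instead of all 5000 points.
svm = SVC(kernel="rbf").fit(X[keep], y[keep])
print(len(keep), round(svm.score(X, y), 2))
```

Since SVM training cost grows faster than linearly in the number of points, training on a few hundred representatives instead of thousands of points is where the speed-up comes from; the paper's contribution is finding such abstractions directly in the kernel-induced feature space.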