855 results for synchrotron-based techniques
Abstract:
Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Because Wikipedia is free and open for anyone to edit, article quality may suffer: contributors have different levels of knowledge and different opinions about a topic, so their contributions can differ considerably. It is therefore important to classify articles so that good-quality articles can be separated from poor-quality ones, which can then be removed from the database. The aim of this study is to classify Wikipedia articles into two classes, class 0 (poor quality) and class 1 (good quality), using the Adaptive Neuro-Fuzzy Inference System (ANFIS) and data mining techniques. Two ANFIS models were built using the Fuzzy Logic Toolbox [1] available in Matlab. The first is based on the rules obtained from the J48 classifier in WEKA, while the second was built using expert knowledge. The data used for this research contain 226 article records taken from the German version of Wikipedia. The dataset consists of 19 inputs and one output. The data were preprocessed to remove redundant attributes. The input variables relate to the editors, contributors, article length, and article lifecycle. Finally, the different methods implemented in this research are analyzed to compare the performance of each classification method.
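As a rough illustration of the rule-extraction step described above, the sketch below trains a decision tree (scikit-learn's DecisionTreeClassifier standing in for WEKA's J48) on synthetic article features and prints its rules, which in the thesis would seed the fuzzy rule base of the first ANFIS; the data, labels, and feature semantics here are illustrative only.

```python
# Sketch of the rule-extraction step: a decision tree is trained on article
# features and its rules can then seed the ANFIS rule base.
# scikit-learn's DecisionTreeClassifier stands in for WEKA's J48 (C4.5);
# the data below are synthetic, not the 19 attributes used in the thesis.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((226, 19))                    # 226 articles, 19 numeric attributes (synthetic)
y = (X[:, 0] + X[:, 3] > 1).astype(int)      # 0 = poor quality, 1 = good quality (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)

print("hold-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree))                     # human-readable rules, candidates for fuzzy IF-THEN rules
```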
Abstract:
Objective: To investigate whether advanced visualizations of spirography-based objective measures are useful in differentiating drug-related motor dysfunctions between the Off state and dyskinesia in Parkinson’s disease (PD). Background: During the course of a 3-year longitudinal clinical study, a total of 65 patients (43 males and 22 females with a mean age of 65) with advanced PD and 10 healthy elderly (HE) subjects (5 males and 5 females with a mean age of 61) were assessed. Both patients and HE subjects performed repeated, time-stamped assessments of their objective health indicators using a test battery implemented on a telemetric touch-screen handheld computer in their home environment. Among other tasks, the subjects were asked to trace a pre-drawn Archimedes spiral using the dominant hand and to repeat the test three times per test occasion. Methods: A web-based framework was developed to enable visual exploration of relevant spirography-based kinematic features by clinicians so that they can in turn evaluate the motor states of the patients, i.e. Off and dyskinesia. The system uses different visualization techniques, such as time series plots, animation, and interaction, and organizes them into different views to aid clinicians in measuring spatial and time-dependent irregularities that could be associated with the motor states. Along with the animation view, the system displays two time series plots representing drawing speed (blue line) and displacement from the ideal trajectory (orange line). The views are coordinated and linked, i.e. user interactions in one view are reflected in the other views. For instance, when the user points at a pixel in the spiral view, the circle size of the underlying pixel increases and a vertical line appears in the time series views to depict the corresponding position. In addition, in order to enable clinicians to observe erratic movements more clearly and thus improve the detection of irregularities, the system displays a color map that gives an idea of the duration of the spirography task. Figure 2 shows single, randomly selected spirals drawn by: A) a patient who experienced dyskinesias, B) a HE subject, and C) a patient in the Off state. Results: According to a domain expert (DN), the spirals drawn in the Off and dyskinesia motor states are characterized by different spatial and time features. For instance, the spiral shown in Fig. 2A was drawn by a patient who showed symptoms of dyskinesia; the drawing speed was relatively high (cf. the blue-colored time series plot and the short timestamp scale on the x axis) and the spatial displacement was high (cf. the orange-colored time series plot), with smooth deviations resulting from uncontrollable movements. The patient also exhibited little hesitation, which is reflected both in the animation of the spiral and in the time series plots. In contrast, the patient who was in the Off state exhibited different kinematic features, as shown in Fig. 2C. In the case of the spirals drawn by a HE subject, there was great precision during the drawing process as well as stable levels of time-dependent features over the test trial, as seen in Fig. 2B. Conclusions: Visualizing spirography-based objective measures enables identification of trends and patterns of drug-related motor dysfunctions at the individual patient level.
Dynamic access to visualized motor tests may be useful during the evaluation of drug-related complications such as under- and over-medication, providing decision support to clinicians when evaluating treatment effects and helping improve the quality of life of patients and their caregivers. In the future, we plan to evaluate the proposed approach by assessing within- and between-clinician variability in ratings in order to determine its actual usefulness, and then use these ratings as target outcomes in supervised machine learning, similarly to the study performed by Memedi et al. (2013).
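As a rough sketch of the two kinematic time series described above (drawing speed and displacement from the ideal trajectory), the fragment below derives both from a time-stamped spiral trace; the trace layout, spiral pitch parameter, and the toy data are assumptions for illustration, not the study's processing pipeline.

```python
# Illustrative computation of the two time series shown in the visualization:
# drawing speed (blue series) and displacement from the ideal Archimedes
# spiral (orange series). The spiral parameter b and the toy trace are assumptions.
import numpy as np

def spiral_features(x, y, t, b=1.0):
    """x, y: trace coordinates centred on the spiral origin; t: timestamps (s);
    b: assumed pitch parameter of the ideal spiral r = b*theta."""
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    speed = np.hypot(dx, dy) / dt                 # drawing speed between consecutive samples
    r = np.hypot(x, y)                            # radial distance of each sample
    theta = np.unwrap(np.arctan2(y, x))           # unwrapped angle along the trace
    deviation = np.abs(r - b * theta)             # displacement from the ideal trajectory
    return speed, deviation

# toy trace: a slightly noisy spiral drawn over 10 seconds
rng = np.random.default_rng(1)
t = np.linspace(0.1, 10.0, 300)
theta = 4 * np.pi * t / t[-1]
r = theta + rng.normal(0.0, 0.05, t.size)
speed, deviation = spiral_features(r * np.cos(theta), r * np.sin(theta), t)
print(speed.mean(), deviation.mean())
```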
Abstract:
This dissertation is focused on theoretical and experimental studies of the optical properties of materials and multilayer structures composing liquid crystal displays (LCDs) and electrochromic (EC) devices. By applying spectroscopic ellipsometry, we have determined the optical constants of thin films of electrochromic tungsten oxide (WOx) and nickel oxide (NiOy), as well as the films’ thickness and roughness. These films, obtained under the chosen sputtering conditions, possess high transmittance, which is important for achieving good visibility and high contrast in an EC device. Another application of general spectroscopic ellipsometry relates to the study of a photo-alignment layer made of a mixture of the azo-dyes SD-1 and SDA-2. We have found the optical constants of this mixture before and after illuminating it with polarized UV light. The results obtained confirm the diffusion model explaining the formation of photo-induced order in azo-dye films. We have developed new techniques for fast characterization of twisted nematic LC cells in transmissive and reflective modes. Our techniques are based on the characteristic functions that we have introduced for determining the parameters of non-uniform birefringent media. These characteristic functions are found by simple procedures and can be utilised for simultaneous determination of retardation, its wavelength dispersion, and twist angle, as well as for solving associated optimization problems. Cholesteric LCDs possess some unique properties, such as bistability and good selective scattering; however, they have a disadvantage: a relatively high driving voltage (tens of volts). The way we propose to reduce the driving voltage consists of applying a stack of thin (~1µm) LC layers. We have also studied the use of a surface-stabilized ferroelectric liquid crystal layer coupled with several retardation plates for birefringent color generation. We have demonstrated that one or two retardation plates are sufficient to achieve good color characteristics and high brightness of the display.
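For context, the retardation that the characteristic functions are used to recover for a single uniform birefringent layer of thickness d follows the standard textbook relation below (not a result specific to this dissertation):

```latex
\Gamma(\lambda) = \frac{2\pi\, d\, \Delta n(\lambda)}{\lambda},
\qquad \Delta n(\lambda) = n_e(\lambda) - n_o(\lambda)
```

Here $n_e$ and $n_o$ are the extraordinary and ordinary refractive indices, whose wavelength dependence gives the retardation its dispersion.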
Predictive models for chronic renal disease using decision trees, Naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often goes untapped, and advanced data mining techniques can help remedy this. This thesis deals mainly with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of their output was evaluated. The three classifiers analyzed are Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Furthermore, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
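A minimal sketch of the pipeline outlined above, with scikit-learn standing in for Weka 3.6 and synthetic data replacing the clinical records; the attribute counts, labels, and selected-feature number are illustrative only.

```python
# Sketch of the IPCRD pipeline: normalization, chi-square feature selection,
# then Decision Tree, Naive Bayes and k-NN, scored with sensitivity/specificity.
# scikit-learn replaces Weka 3.6 here; all data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((400, 24))                        # blood, urine and symptom attributes (synthetic)
y = (X[:, 1] + X[:, 5] > 1).astype(int)          # 1 = chronic renal disease, 0 = healthy (synthetic)

X = MinMaxScaler().fit_transform(X)              # normalization step
X = SelectKBest(chi2, k=10).fit_transform(X, y)  # chi-square feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier()),
                  ("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    tn, fp, fn, tp = confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```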
Abstract:
Of the ways in which agent behaviour can be regulated in a multiagent system, electronic contracting – based on explicit representation of different parties' responsibilities, and the agreement of all parties to them – has significant potential for modern industrial applications. Based on this assumption, the CONTRACT project aims to develop and apply electronic contracting and contract-based monitoring and verification techniques in real-world applications. This paper presents results from the initial phase of the project, which focused on requirements solicitation and analysis. Specifically, we survey four use cases from diverse industrial applications, examine how they can benefit from an agent-based electronic contracting infrastructure, and outline the technical requirements that would be placed on such an infrastructure. We present the designed CONTRACT architecture and describe how it may fulfil these requirements. In addition to motivating our work on the contract-based infrastructure, the paper aims to provide a much-needed community resource in terms of the use cases themselves and to provide a clear commercial context for the development of work on contract-based systems.
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization applied to a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, a single-leak scenario with a certain leak-size pattern is usually assumed, which may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. The method can be seen as an upgrade of the above-mentioned direct modeling approach, into which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the size of the leaks. This is an inverse/direct modeling method that tries to benefit from both approaches: on one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and the size of the leaks, and on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method to a DMA of the Barcelona water distribution network is provided and discussed. The obtained results show that leakage detection and localization under multiple-leak scenarios may be performed efficiently following an easy procedure.
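A toy sketch of the inverse/direct idea described above: a genetic algorithm proposes candidate leak locations and sizes and is scored against measured pressures through a hydraulic model; here the model is a crude placeholder (simulate_pressures) rather than a calibrated network simulator, and all figures are illustrative.

```python
# Toy genetic algorithm for multiple-leak localization: individuals encode
# (node, size) pairs and are scored by the mismatch between measured pressures
# and pressures returned by a hydraulic model. simulate_pressures() is a
# placeholder for a real simulator; it is NOT a real network model.
import random

N_NODES, N_LEAKS, POP, GENS = 50, 2, 40, 60

def simulate_pressures(leaks):
    """Placeholder hydraulic model: one pressure value per node."""
    p = [50.0] * N_NODES
    for node, size in leaks:
        for i in range(N_NODES):
            p[i] -= size / (1 + abs(i - node))    # crude pressure-drop footprint
    return p

MEASURED = simulate_pressures([(12, 3.0), (37, 1.5)])   # pressures caused by the "true" leaks

def fitness(ind):
    return -sum((a - b) ** 2 for a, b in zip(simulate_pressures(ind), MEASURED))

def random_ind():
    return [(random.randrange(N_NODES), random.uniform(0.0, 5.0)) for _ in range(N_LEAKS)]

pop = [random_ind() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # crossover: pick each leak slot from a parent
        if random.random() < 0.3:                             # mutation: re-draw one leak hypothesis
            child[random.randrange(N_LEAKS)] = (random.randrange(N_NODES), random.uniform(0.0, 5.0))
        children.append(child)
    pop = parents + children

print("estimated leak nodes and sizes:", max(pop, key=fitness))
```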
Abstract:
Distributed energy and water balance models require time-series surfaces of the meteorological variables involved in hydrological processes. Most hydrological GIS-based models apply simple interpolation techniques to extrapolate the point-scale values registered at weather stations to the watershed scale. In mountainous areas, where the monitoring network covers the complex terrain heterogeneity only poorly, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become an independent software package. The individual interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected, taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007) and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware, available upon registration. Input data include weather station records and topographic data; the output consists of tables and maps of the meteorological variables at hourly, daily, annual, or predefined rainfall-event-duration scales. It offers its own pre- and post-processing tools, including video outlook, map printing, and the possibility of exporting the maps to image or ASCII ArcGIS formats. This study presents the user-friendly interface of the software and shows some case studies with applications to hydrological modeling.
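As a rough illustration of the kind of height-corrected interpolation mentioned above for temperature, the sketch below reduces station temperatures to sea level with an assumed lapse rate, interpolates by inverse distance weighting, and re-applies the correction at grid elevation; the lapse rate, station records, and grid are assumptions, not MeteoMap's actual algorithms.

```python
# Height-corrected temperature interpolation sketch: station values are
# reduced to sea level with an assumed lapse rate, interpolated by inverse
# distance weighting, then lifted back to each grid cell's elevation.
import numpy as np

LAPSE = -0.0065  # K per metre, standard atmospheric lapse rate (assumption)

def interpolate_temperature(st_xy, st_z, st_t, grid_xy, grid_z, power=2.0):
    t_sea = st_t - LAPSE * st_z                       # reduce station temps to sea level
    d = np.linalg.norm(grid_xy[:, None, :] - st_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power            # inverse distance weights
    t_grid_sea = (w * t_sea).sum(axis=1) / w.sum(axis=1)
    return t_grid_sea + LAPSE * grid_z                # re-apply the height correction

# three illustrative stations (x, y in km; z in m; t in degC) and two grid cells
stations_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
stations_z = np.array([500.0, 1200.0, 2000.0])
stations_t = np.array([15.0, 10.5, 5.0])
grid_xy = np.array([[5.0, 5.0], [8.0, 2.0]])
grid_z = np.array([900.0, 1500.0])
print(interpolate_temperature(stations_xy, stations_z, stations_t, grid_xy, grid_z))
```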
Abstract:
Traditional profitability evaluation techniques have characteristics that make them effective with respect to the economic aspects of investment analysis. However, the validity of the information provided by these methods depends on the data included in the evaluation. In this sense, because of the complexity of, and the interrelationships within, companies' production processes, the changes brought about by an investment can have an impact on areas that are not directly involved with the project to be implemented. This makes it difficult to identify, and consequently include, all of the factors that affect the analysis of the project. Furthermore, there is no established methodology for quantifying the impacts related to indirect activities. To address this problem, this work presents a systematic investment evaluation procedure that, through a structured sequence of steps and using the information generated by an Activity-Based Costing (ABC) system, makes it possible to include in the analysis the indirect impacts generated by the project. The application of this procedure to an equipment replacement project in an industrial company shows that the information generated complements that obtained from traditional techniques. In this way, it was possible to evaluate the economic impact caused by possible changes in the sectors not directly linked to the project.
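A toy numeric sketch of the underlying idea: indirect cost changes estimated from ABC activity drivers are added to the project's direct cash flows before the usual NPV calculation; all activities, rates, and figures below are made up for illustration and are not taken from the case study.

```python
# Toy illustration: the cash flows of an equipment replacement project are
# complemented by indirect savings estimated from ABC activity drivers
# (setups, inspections, internal transport) and only then discounted.
# All figures are invented for illustration.
direct_cash_flows = [-250_000, 60_000, 60_000, 60_000, 60_000, 60_000]  # year 0..5

# activity: (cost per driver unit, change in driver units per year caused by the project)
abc_impacts = {
    "machine setups":      (120.0, -300),
    "quality inspections": (35.0, -800),
    "internal transport":  (8.0, -2_500),
}
indirect_saving = -sum(rate * delta for rate, delta in abc_impacts.values())

discount_rate = 0.12
cash_flows = [direct_cash_flows[0]] + [cf + indirect_saving for cf in direct_cash_flows[1:]]
npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
print(f"indirect annual saving from ABC analysis: {indirect_saving:,.0f}")
print(f"NPV including indirect impacts: {npv:,.0f}")
```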
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for the test of the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expanded scope makes an efficient, yet fine-grained, search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overhead are kept very low. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC’02 SoC Test Benchmarks and further compared to other test planning methods in the literature. This comparison confirms the efficiency of the proposed methods.
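As a rough illustration of the power-constrained scheduling problem addressed by both approaches, the sketch below packs core tests into parallel sessions without exceeding a power budget using a simple greedy heuristic; the cores, test times, power figures, and the heuristic itself are illustrative, not the thesis' algorithms.

```python
# Toy power-aware test scheduler: cores are sorted by test time and greedily
# placed at the earliest start time where the power drawn by overlapping tests
# stays under the budget. Illustrates the constraint, not the thesis' method.
cores = {          # core: (test_time, test_power) -- illustrative values
    "cpu": (400, 120), "dsp": (350, 90), "mem": (300, 60),
    "usb": (150, 40),  "adc": (120, 70), "gpio": (80, 30),
}
POWER_BUDGET = 200

schedule = []                     # list of (start, end, core, power)
for core, (t, p) in sorted(cores.items(), key=lambda kv: -kv[1][0]):
    start = 0
    while True:
        # power already drawn by tests that overlap the candidate interval
        load = sum(pw for s, e, _, pw in schedule if s < start + t and e > start)
        if load + p <= POWER_BUDGET:
            schedule.append((start, start + t, core, p))
            break
        # jump to the next point where some running test finishes
        start = min(e for s, e, _, _ in schedule if e > start)

for s, e, core, p in sorted(schedule):
    print(f"{core:5s} start={s:4d} end={e:4d} power={p}")
print("total test time:", max(e for _, e, _, _ in schedule))
```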
Abstract:
The number of research papers available today is growing at a staggering rate, generating a huge amount of information that people cannot keep up with. According to a trend indicated by the United States’ National Science Foundation, more than 10 million new papers will be published in the next 20 years. Because most of these papers will be available on the Web, this research focuses on exploring issues in recommending research papers to users, in order to lead users directly to papers of their interest. Recommender systems are used to recommend items to users from a huge stream of available items, according to users’ interests. This research focuses on the two most prevalent techniques to date, namely Content-Based Filtering and Collaborative Filtering. The first explores the text of the paper itself, recommending items similar in content to the ones the user has rated in the past. The second explores the citation web existing among papers. As these two techniques have complementary advantages, we explored hybrid approaches to recommending research papers. We created standalone and hybrid versions of the algorithms and evaluated them through both offline experiments on a database of 102,295 papers and an online experiment with 110 users. Our results show that the two techniques can be successfully combined to recommend papers. Coverage is also increased to the level of 100% with the hybrid algorithms. In addition, we found that different algorithms are more suitable for recommending different kinds of papers. Finally, we verified that users’ research experience influences the way they perceive recommendations. In parallel, we found no significant differences in recommending papers to users from different countries. However, our results showed that users interacting with a research paper recommender system are much happier when the interface is presented in their native language, regardless of the language in which the papers are written. Therefore, the interface should be tailored to the user’s mother tongue.
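A minimal sketch of the hybrid idea: a content-based score (TF-IDF cosine similarity over paper text) is blended with a collaborative score derived from the citation web; the toy corpus, citation matrix, and blending weight are assumptions, not the thesis' data or exact algorithms.

```python
# Hybrid recommendation sketch: blend content similarity (TF-IDF cosine over
# abstracts) with a citation-based similarity. Corpus and citations are toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "recommender systems collaborative filtering evaluation",
    "content based filtering of research papers",
    "citation analysis of scientific literature",
    "deep learning for image recognition",
]
citations = np.array([          # citations[i, j] = 1 if paper i cites paper j
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

content_sim = cosine_similarity(TfidfVectorizer().fit_transform(abstracts))
citation_sim = cosine_similarity(citations)       # papers with similar reference lists look alike

alpha = 0.5                                       # blending weight (assumption)
hybrid = alpha * content_sim + (1 - alpha) * citation_sim

query = 0                                         # recommend papers similar to paper 0
ranking = np.argsort(-hybrid[query])
print([i for i in ranking if i != query])
```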
Abstract:
With the ever-increasing demand for high-complexity consumer electronic products, market pressures call for faster product development and lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial share of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For RF analog circuits commonly used in wireless applications, testing is further complicated because of the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit dependent, requiring reconfiguration of the circuit being tested, and are generally not usable in RF circuits. In the SoC environment, as processing and memory resources are available, they could be used in the test. However, the overhead of adding additional AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it is able to reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics like intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products nowadays.
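As a rough sketch of the frequency-domain evaluation the digitizer enables, the fragment below applies an FFT to a two-tone test capture and reads out the levels of a fundamental tone and a third-order intermodulation product; the sample rate, tone frequencies, and signal are illustrative, not measurements from the prototyped circuits.

```python
# Frequency-domain test sketch: digitized samples are windowed, transformed
# with an FFT, and tone/intermodulation bins are inspected.
import numpy as np

fs = 1_000_000                     # sample rate of the (assumed) digitizer, Hz
n = 4096
t = np.arange(n) / fs
f1, f2 = 90_000, 110_000           # two-tone test stimulus (illustrative)
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
x += 0.01 * np.sin(2*np.pi*(2*f1 - f2)*t)   # small third-order intermodulation product

spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1/fs)

def level_db(f):
    """Magnitude (dB) of the spectral bin closest to frequency f."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))] + 1e-12)

print("tone @ f1      :", round(level_db(f1), 1), "dB")
print("IM3 @ 2*f1 - f2:", round(level_db(2*f1 - f2), 1), "dB")
```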
Abstract:
This paper investigates whether there is evidence of structural change in the Brazilian term structure of interest rates. Multivariate cointegration techniques are used to verify this evidence. Two econometric models are estimated. The first one is a Vector Autoregressive Model with Error Correction Mechanism (VECM) with smooth transition in the deterministic coefficients (Ripatti and Saikkonen [25]). The second one is a VECM with abrupt structural change formulated by Hansen [13]. Two datasets were analysed. The first one contains nominal interest rates with maturities up to three years; the second focuses on maturities up to one year. The first dataset covers a sample period from 1995 to 2010 and the second from 1998 to 2010. The frequency is monthly. The estimated models suggest the existence of structural change in the Brazilian term structure. It was possible to document the existence of multiple regimes using both techniques for both databases. The risk premium for different spreads varied considerably during the earliest period of both samples and seemed to converge to stable and lower values at the end of the sample period. Long-term risk premiums seemed to converge to international standards, although the Brazilian term structure is still subject to liquidity problems for longer maturities.
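For reference, the baseline cointegrated model that both specifications extend can be written in standard VECM form (the smooth-transition and abrupt-break features enter through the deterministic term; the notation is generic, not the papers' exact parameterization):

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta y_{t-i} + \mu_t + \varepsilon_t
```

Here $y_t$ stacks the yields at the different maturities, $\alpha\beta' y_{t-1}$ is the error-correction term, and $\mu_t$ collects the deterministic components subject to regime change.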