858 results for instruments and electronics
Abstract:
Adopting standards-based weblab infrastructures can add value by spreading their influence and acceptance in education. This paper suggests a solution based on the IEEE 1451.0 standard and FPGA technology for creating reconfigurable weblab infrastructures using Instruments and Modules (I&Ms) described through standard Hardware Description Language (HDL) files. It describes a methodology for creating and binding I&Ms into an IEEE 1451 module embedded in an FPGA-based board that can be remotely controlled and accessed using IEEE 1451-HTTP commands. Finally, an example of a stepper-motor controller module bound to that IEEE 1451 module is described.
Abstract:
This paper presents the design and implementation of direct power controllers for three-phase matrix converters (MC) operating as Unified Power Flow Controllers (UPFC). Theoretical principles of the decoupled linear power controllers of the MC-UPFC to minimize the cross-coupling between active and reactive power control are established. From the matrix converter based UPFC model with a modified Venturini high frequency PWM modulator, decoupled controllers for the transmission line active (P) and reactive (Q) power direct control are synthesized. Simulation results, obtained from Matlab/Simulink, are presented in order to confirm the proposed approach. Results obtained show decoupled power control, zero error tracking, and fast responses with no overshoot and no steady-state error.
Abstract:
Infrared spectroscopy, in either the near- or mid-infrared (NIR/MIR) region of the spectrum, has gained broad acceptance in industry for bioprocess monitoring under the Process Analytical Technology framework, owing to its rapid, economical, highly sensitive, and versatile mode of application. Given the relevance of cyprosin (mostly to the dairy industry), and since NIR and MIR spectroscopy present specific characteristics that may ultimately complement each other, in the present work these techniques were compared for monitoring and characterizing recombinant cyprosin production by Saccharomyces cerevisiae, by in situ and by at-line high-throughput analysis, respectively. Partial least-squares regression models relating NIR and MIR spectral features with biomass, cyprosin activity, specific activity, glucose, galactose, ethanol, and acetate concentrations were developed, all presenting, in general, high regression coefficients and low prediction errors. For biomass and glucose, slightly better models were achieved by in situ NIR spectroscopic analysis, while for cyprosin activity and specific activity, slightly better models were achieved by at-line MIR spectroscopic analysis. Both techniques therefore enabled monitoring of the highly dynamic cyprosin production bioprocess, thereby providing more efficient platforms for bioprocess optimization and control.
Abstract:
It is well recognized that professional musicians are at risk of hearing damage due to exposure to high sound pressure levels while playing music. However, it is important to recognize that musicians' exposure may start early in the course of their training, as students in the classroom and at home. Studies on the sound exposure of music students and their hearing disorders are scarce and do not take important influencing variables into account. This study therefore aimed to describe the sound level exposure of music students across different music styles, classes, and instruments played. Further, this investigation attempted to analyze students' perceptions of exposure to loud music and the consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music styles, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting interventions for noise risk reduction at an early stage, when musicians are commencing their activity as students.
Abstract:
The objective was to validate the Regulatory Sensory Processing Disorders criteria (DC:0-3R, 2005) using empirical data on the presence and severity of sensory modulation deficits and specific psychiatric symptoms in clinical samples. Sixty toddlers who attended a child mental health unit were diagnosed by a clinical team. The following two groups were created: toddlers with RSPD (N = 14) and those with "other diagnoses in Axis I/II of the DC:0-3R" (OD3R) (N = 46). Independently of the clinical process, parents completed the Infant Toddler Sensory Profile (as a checklist for sensory symptoms) and the Achenbach Child Behavior Checklist for ages 1½–5 (CBCL 1½–5). The scores from the two groups were compared. The results showed the following for the RSPD group: a higher number of affected sensory areas and patterns than in the OD3R group; a higher percentage of sensory deficits in specific sensory categories; and a higher severity of behavioral symptoms such as withdrawal, inattention, other externalizing problems and pervasive developmental problems in the CBCL 1½–5. The results confirmed our hypotheses by indicating a higher severity of sensory symptoms and identifying specific behavioral problems in children with RSPD. The results revealed convergent validity between the instruments and the diagnostic criteria for RSPD and supported the validity of RSPD as a unique diagnosis. The findings also suggested the importance of identifying sensory modulation deficits in order to develop an early intervention to enhance the sensory capacities of children who do not fully satisfy the criteria for some DSM-IV-TR disorders.
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Electronics)
Abstract:
This paper proposes and validates a model-driven software engineering technique for spreadsheets. The technique that we envision builds on the embedding of spreadsheet models under a widely used spreadsheet system. This means that we enable the creation and evolution of spreadsheet models under a spreadsheet system. More precisely, we embed ClassSheets, a visual language with a syntax similar to the one offered by common spreadsheets, that was created with the aim of specifying spreadsheets. Our embedding allows models and their conforming instances to be developed under the same environment. In practice, this convenient environment enhances evolution steps at the model level while the corresponding instance is automatically co-evolved. Finally, we have designed and conducted an empirical study with human users in order to assess our technique in production environments. The results of this study are promising and suggest that productivity gains are realizable under our model-driven spreadsheet development setting.
Abstract:
We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.
Abstract:
To compare autofluorescence (AF) images obtained with the confocal scanning laser ophthalmoscope (using the Heidelberg retina angiograph, HRA) and the modified Topcon fundus camera in a routine clinical setting. A prospective comparative study conducted at the Jules-Gonin Eye Hospital. Fifty-six patients from the medical retina clinic. All patients had complete ophthalmic slit-lamp and fundus examinations, colour and red-free fundus photography, AF imaging with both instruments, and fluorescein angiography. Cataract and fixation were graded clinically. AF patterns were analyzed for healthy and pathological features. Differences in image noise were analyzed by cataract grading and fixation. A total of 105 eyes were included. AF patterns revealed by the retina angiograph and the fundus camera images, respectively, were: a dark optic disc in 72% versus 15%; a dark fovea in 92% versus 4%; sub- and intraretinal fluid visible as hyperautofluorescence on HRA images only; and lipid exudates visible as hypoautofluorescence on HRA images only. The same autofluorescent pattern was found on both images for geographic atrophy, retinal pigment changes, drusen and haemorrhage. Image noise was significantly associated with the degree of cataract and/or poor fixation, favouring the fundus camera. Images acquired by the fundus camera before and after fluorescein angiography were identical. Fundus AF images differ according to the technical differences of the instruments used. Knowledge of these differences is important not only for correctly interpreting images, but also for selecting the most appropriate instrument for the clinical situation.
Abstract:
A process analysis was conducted in a community-based treatment programme for alcohol abuse. The aims of the study were: to evaluate assessment instruments and measures; to measure change following treatment; to monitor gender differences; to assess the importance of early and current relationships; and to evaluate the effects of therapists. Subjects (n=145; 83 males, 62 females) completed a semi-structured interview schedule, the Severity of Alcohol Dependency Questionnaire (SADQ), the Short Alcohol Dependence Data Questionnaire (SADD), the General Health Questionnaire (GHQ 12), and the Alcohol Problems Questionnaire (APQ). A further three non-standardised self-rated measures were devised by the author, including an opportunity to obtain qualitative data. Follow-up data were collected at 3, 9 and 15 months after first assessment. The SADD, APQ and consumption measures using detailed drink diaries proved the most relevant assessment measures. Following treatment, there was a significant reduction in clients' dependency levels at 3 months, maintained through 9 and 15 months. Key client-rated changes were progress in reducing consumption and alcohol problems, leading to a better quality of life and health. Qualitative data augmented these quantitative results. Psychological and acquired cognitive behavioural skills emerged as the main reasons for positive change, and the treatment programme was found to have played a significant role in their acquisition. It appears that addressing marital problems can lead to a reduction in alcohol dependency levels. Gender analysis showed that males and females were similar in demographic characteristics, alcohol history details and dependence levels. It was concluded that the differences found did not necessitate different treatment programmes for women. Early family relationships were more problematic for females.
Therapist performance varied, and that variance was reflected in their clients' outcomes.
Abstract:
INTRODUCTION: Numerous instruments have been developed to assess spirituality and measure its association with health outcomes. This study's aims were to identify instruments used in clinical research that measure spirituality; to propose a classification of these instruments; and to identify those instruments that could provide information on the need for spiritual intervention. METHODS: A systematic literature search in MEDLINE, CINHAL, PsycINFO, ATLA, and EMBASE databases, using the terms "spirituality" and "adult$," and limited to journal articles was performed to identify clinical studies that used a spiritual assessment instrument. For each instrument identified, measured constructs, intended goals, and data on psychometric properties were retrieved. A conceptual and a functional classification of instruments were developed. RESULTS: Thirty-five instruments were retrieved and classified into measures of general spirituality (N = 22), spiritual well-being (N = 5), spiritual coping (N = 4), and spiritual needs (N = 4) according to the conceptual classification. Instruments most frequently used in clinical research were the FACIT-Sp and the Spiritual Well-Being Scale. Data on psychometric properties were mostly limited to content validity and inter-item reliability. According to the functional classification, 16 instruments were identified that included at least one item measuring a current spiritual state, but only three of those appeared suitable to address the need for spiritual intervention. CONCLUSIONS: Instruments identified in this systematic review assess multiple dimensions of spirituality, and the proposed classifications should help clinical researchers interested in investigating the complex relationship between spirituality and health. Findings underscore the scarcity of instruments specifically designed to measure a patient's current spiritual state. 
Moreover, the relatively limited data available on psychometric properties of these instruments highlight the need for additional research to determine whether they are suitable in identifying the need for spiritual interventions.
Abstract:
The paper discusses the maintenance challenges of organisations with a large number of devices and proposes the use of probabilistic models to assist monitoring and maintenance planning. The proposal assumes instrument connectivity to report relevant features for monitoring. The existence of sufficient historical records with diagnosed breakdowns is also required to make the probabilistic models reliable and useful for the predictive maintenance strategies based on them. Regular Markov models based on estimated failure and repair rates are proposed to calculate the availability of the instruments, and Dynamic Bayesian Networks are proposed to model cause-effect relationships that trigger predictive maintenance services based on the influence between observed features and previously documented diagnostics.
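As a minimal illustration of a regular Markov availability model of the kind mentioned above, the two-state (up/down) case with assumed failure and repair rates can be sketched as follows; the rates are hypothetical, not taken from the paper:

```python
# Two-state Markov availability sketch: an instrument is either Up or Down.
# Steady-state availability has the closed form A = mu / (lambda + mu).
import numpy as np

failure_rate = 0.01   # lambda: failures per hour (hypothetical)
repair_rate = 0.50    # mu: repairs per hour (hypothetical)

# Closed-form steady-state availability.
availability = repair_rate / (failure_rate + repair_rate)

# The same result from the stationary distribution of the discretized chain.
dt = 0.01  # hours; small enough that rate*dt are valid transition probabilities
P = np.array([[1 - failure_rate * dt, failure_rate * dt],
              [repair_rate * dt,      1 - repair_rate * dt]])
pi = np.linalg.matrix_power(P, 200_000)[0]   # long-run state occupancy (Up, Down)
print(f"A (closed form) = {availability:.4f}, A (chain) = {pi[0]:.4f}")
```

With these rates both computations give an availability of about 0.98; in a real deployment lambda and mu would be estimated from the historical breakdown and repair records the paper requires.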
Abstract:
Background: Choosing an adequate measurement instrument depends on the proposed use of the instrument, the concept to be measured, the measurement properties (e.g. internal consistency, reproducibility, content and construct validity, responsiveness, and interpretability), the requirements, the burden for subjects, and costs of the available instruments. As far as measurement properties are concerned, there are no sufficiently specific standards for the evaluation of measurement properties of instruments to measure health status, and also no explicit criteria for what constitutes good measurement properties. In this paper we describe the protocol for the COSMIN study, the objective of which is to develop a checklist that contains COnsensus-based Standards for the selection of health Measurement INstruments, including explicit criteria for satisfying these standards. We will focus on evaluative health-related patient-reported outcomes (HR-PROs), i.e. patient-reported health measurement instruments used in a longitudinal design as an outcome measure, excluding health care related PROs, such as satisfaction with care or adherence. The COSMIN standards will be made available in the form of an easily applicable checklist. Method: An international Delphi study will be performed to reach consensus on which and how measurement properties should be assessed, and on criteria for good measurement properties. Two sources of input will be used for the Delphi study: (1) a systematic review of properties, standards and criteria of measurement properties found in systematic reviews of measurement instruments, and (2) an additional literature search of methodological articles presenting a comprehensive checklist of standards and criteria. The Delphi study will consist of four (written) Delphi rounds, with approximately 30 expert panel members with different backgrounds in clinical medicine, biostatistics, psychology, and epidemiology.
The final checklist will subsequently be field-tested by assessing the inter-rater reproducibility of the checklist. Discussion: Since the study will mainly be anonymous, problems that are commonly encountered in face-to-face group meetings, such as the dominance of certain persons in the communication process, will be avoided. By performing a Delphi study and involving many experts, the likelihood that the checklist will have sufficient credibility to be accepted and implemented will increase.
Abstract:
Tone mapping is the problem of compressing the range of a high-dynamic-range image so that it can be displayed on a low-dynamic-range screen without losing details or introducing novel ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
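As a rough illustration of what a global tone-mapping stage does, the sketch below uses the classic Reinhard-style operator L/(1+L) as a simple stand-in; it is not the authors' perception-based operator, which models visual adaptation and cone saturation:

```python
# Global tone mapping sketch: compress HDR luminance into [0, 1) for display.
import numpy as np

def tone_map_global(hdr_luminance, key=0.18):
    """Scale by log-average luminance, then compress with L/(1+L)."""
    eps = 1e-6
    # The "key" rescales the scene so its log-average maps to mid-grey.
    log_avg = np.exp(np.mean(np.log(hdr_luminance + eps)))
    scaled = key * hdr_luminance / log_avg
    # Global compression: bright values saturate smoothly toward 1.
    return scaled / (1.0 + scaled)

# Synthetic HDR luminance spanning roughly 5 orders of magnitude.
hdr = np.exp(np.random.default_rng(1).uniform(-4, 8, size=(64, 64)))
ldr = tone_map_global(hdr)
print(ldr.min(), ldr.max())  # all values lie inside [0, 1)
```

A full operator like the one in the abstract would follow this global stage with local contrast enhancement, which this sketch omits.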
Abstract:
In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.
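For context, the infinite-bandwidth capacity referred to above is the standard wideband limit of the capacity formula as bandwidth B grows without bound, with received power P and noise spectral density N_0:

```latex
C_\infty \;=\; \lim_{B \to \infty} B \log_2\!\left(1 + \frac{P}{N_0 B}\right) \;=\; \frac{P}{N_0 \ln 2} \quad \text{bits/s}
```

The abstract's point is that non-peaky signaling approaches this value closely at large but finite bandwidths before its rate eventually decays toward zero.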