939 results for State-based reasoning
Abstract:
OBJECTIVE: To describe the spatial patterns of leprosy in the Brazilian state of Tocantins. METHODS: This study was based on morbidity data obtained from the Sistema de Informações de Agravos de Notificação (SINAN – Brazilian Notifiable Diseases Information System) of the Ministry of Health. All new leprosy cases in individuals residing in the state of Tocantins between 2001 and 2012 were included. In addition to the description of general disease indicators, a descriptive spatial analysis, an empirical Bayesian analysis and a spatial dependence analysis were performed by means of global and local Moran's indexes. RESULTS: A total of 14,542 new cases were recorded during the period under study. Based on the annual case detection rate, 77.0% of the municipalities were classified as hyperendemic (> 40 cases/100,000 inhabitants). Regarding the annual case detection rate in individuals under 15 years of age, 65.4% of the municipalities were hyperendemic (10.0 to 19.9 cases/100,000 inhabitants); 26.6% had a detection rate of grade 2 disability cases between 5.0 and 9.9 cases/100,000 inhabitants. There was a geographical overlap of clusters of municipalities with high detection rates in hyperendemic areas. Clusters with high disease risk (global Moran's index: 0.51; p < 0.001), ongoing transmission (0.47; p < 0.001) and late diagnosis (0.44; p < 0.001) were identified mainly in the central-north and southwestern regions of Tocantins. CONCLUSIONS: We identified high-risk clusters for transmission and late diagnosis of leprosy in the Brazilian state of Tocantins. Surveillance and control measures should be prioritized in these high-risk municipalities.
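The clustering statistic used in this study is the standard global Moran's index; as a rough illustration of how it is computed for municipality-level detection rates, here is a minimal Python sketch (the function name, toy rates and contiguity matrix are illustrative assumptions, not the study's data or code):

```python
import numpy as np

def global_morans_i(rates: np.ndarray, W: np.ndarray) -> float:
    """Global Moran's I for area-level rates.

    rates: 1-D array of detection rates (one value per municipality).
    W: spatial weights matrix (W[i, j] > 0 when areas i and j are neighbours).
    """
    n = rates.size
    z = rates - rates.mean()             # deviations from the mean rate
    num = (W * np.outer(z, z)).sum()     # spatially weighted cross-products
    den = (z ** 2).sum()
    return (n / W.sum()) * num / den

# Toy example: four municipalities along a line (1-2, 2-3, 3-4 adjacency),
# with high rates clustered at one end.
rates = np.array([60.0, 55.0, 10.0, 8.0])     # cases per 100,000
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(global_morans_i(rates, W))   # positive: similar rates cluster in space
```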
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Engineering education includes not only teaching fundamental theoretical concepts but also their verification during practical lessons in laboratories. The usual strategies for this are frequently based on Problem-Based Learning, starting from a given state and proceeding forward to a target state. The feasibility and effectiveness of this procedure depend on previous states and on whether the present state was caused by, or resulted from, earlier ones. This often happens in engineering education when the achieved results do not match the desired ones, e.g. when programming code is being developed or when the cause of the wrong behavior of an electronic circuit is being identified. It is thus important to also prepare students to proceed in the reverse way, i.e., given a start state, to generate the explanation, or even the principles, that underlie it. Later on, this sort of skill will be important, for instance, to a doctor taking a patient's history or to an engineer discovering the source of a malfunction. Besides better preparing students for their future work, this learning methodology presents pedagogical advantages. The work presented in this document describes an automation project developed by a group of students in an engineering polytechnic school laboratory. The main objective was to improve the performance of a Braille machine. However, in a scenario of Reverse Problem-Based Learning, the students first had to discover and characterize the entire machine's operation before being allowed (and being able) to propose a solution for the existing problem.
Abstract:
In this paper we present results on the functioning of a multilayered a-SiC:H heterostructure as a device for wavelength-division demultiplexing of optical signals. The device is composed of two stacked p-i-n photodiodes, both optimized for the selective collection of photogenerated carriers. Band gap engineering was used to adjust the photogeneration and recombination rate profiles of the intrinsic absorber regions of each photodiode to short and long wavelength absorption and carrier collection in the visible spectrum. The photocurrent signal using different input optical channels was analyzed under reverse and forward bias and under steady-state illumination. This photocurrent is used as an input to a demux algorithm based on the voltage-controlled sensitivity of the device. The device functioning is explained with results obtained by numerical simulation, which provide insight into the internal electric configuration of the double heterojunction. These results attribute the frequency-domain behavior of the device to a wavelength-tunable photocapacitance arising from the accumulation of space charge localized at the internal junction. The direct relation between the experimentally observed capacitive effects of the double diode and the quality of the semiconductor materials forming the internal junction is highlighted.
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to a statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
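As a rough illustration of the observation model assumed above — abundance fractions drawn from a mixture of Dirichlet densities (nonnegative, sum-to-one by construction) mixed linearly with endmember signatures — the following Python sketch generates synthetic data of that form (all names, sizes and parameter values are illustrative, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_abundances(n_pixels, weights, alphas):
    """Draw abundance vectors from a mixture of Dirichlet densities.

    weights: mixing probabilities of the Dirichlet modes.
    alphas:  list of Dirichlet parameter vectors, one per mode.
    Each sample is nonnegative and sums to one by construction.
    """
    modes = rng.choice(len(weights), size=n_pixels, p=weights)
    return np.stack([rng.dirichlet(alphas[m]) for m in modes])

# Toy linear mixing model: 3 endmembers observed in 5 spectral bands.
M = rng.uniform(0.1, 1.0, size=(5, 3))                  # endmember signatures
A = sample_abundances(1000, [0.6, 0.4],
                      [np.array([8.0, 1.0, 1.0]),       # highly mixed mode
                       np.array([2.0, 2.0, 2.0])])
Y = A @ M.T + 0.01 * rng.normal(size=(1000, 5))         # observed pixels
```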
Abstract:
This study is primarily focused on establishing the solid-state sensory abilities of several luminescent polymeric calix[4]arene-based materials toward selected nitroaromatic compounds (NACs), creating the foundations for their future application as high-performance materials for the detection of high explosives. The phenylene ethynylene-type polymers possessing bis-calix[4]arene scaffolds in their core were designed to take advantage of the known recognition abilities of calixarene compounds toward neutral guests, particularly in the solid state, therefore providing enhanced sensitivity and selectivity in the sensing of a given analyte. It was found that all the calix[4]arene-poly(para-phenylene ethynylene)s reported here displayed high sensitivities toward the detection of nitrobenzene, 2,4-dinitrotoluene and 2,4,6-trinitrotoluene (TNT). Particularly effective and significant was the response of the films (25-60 nm thick) upon exposure to TNT vapor (10 ppb): over 50% fluorescence quenching was achieved in only 10 s. In contrast, a model polymer lacking the calixarene units showed only reduced quenching activity for the same set of analytes, clearly highlighting the relevance of the macrocycles in promoting the signaling of the transduction event. The films exhibited high photostability (less than 0.5% loss of fluorescence intensity over 15 min of continuous irradiation), and the fluorescence quenching sensitivity could be fully recovered after exposure of the quenched films to saturated vapors of hydrazine (the initial fluorescence intensities were usually recovered within 2-5 min of exposure).
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, Environmental Systems Management profile
Abstract:
XML Schema is one of the most used specifications for defining types of XML documents. It provides an extensive set of primitive data types, ways to extend and reuse definitions, and an XML syntax that simplifies automatic manipulation. However, many features that make XML Schema Definitions (XSD) so interesting also make them rather cumbersome to read. Several tools to visualize and browse schema definitions have been proposed to cope with this issue. The novel approach proposed in this paper is to base XSD visualization and navigation on the XML document itself, using solely the web browser, without requiring a pre-processing step or an intermediate representation. We present the design and implementation of a web-based XML Schema browser called schem@Doc that operates over the XSD file itself. With this approach, XSD visualization is synchronized with the source file and always reflects its current state. This tool fits well in the schema development process and is easy to integrate into web repositories containing large numbers of XSD files.
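A minimal sketch of the underlying idea — reading definitions straight from the XSD file itself, with no intermediate representation — is shown below in Python for illustration (the actual schem@Doc tool runs in the web browser; the file name here is hypothetical):

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def list_top_level_definitions(xsd_path):
    """Enumerate the global element and type definitions of an XSD file,
    reading the schema document directly (no pre-processing step)."""
    root = ET.parse(xsd_path).getroot()
    for child in root:
        if child.tag in (XS + "element", XS + "complexType", XS + "simpleType"):
            print(child.tag[len(XS):], ":", child.get("name"))

# list_top_level_definitions("purchase_order.xsd")  # hypothetical schema file
```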
Abstract:
A cross-sectional survey on schistosomiasis was carried out in Comercinho (Minas Gerais State, Brazil), a town with 1,474 inhabitants. Stool examinations (Kato-Katz method) and physical examinations were performed on 90% of the population and on 84% of the individuals over 2 years of age, respectively. Ecological and individual (case-control) analyses were used to investigate the relation between splenomegaly and S. mansoni egg counts in different age groups. In the ecological analysis there was a clear correspondence between a higher geometric mean egg count and a higher percentage of splenomegaly in the age groups 5-9 and 10-12 years. In the individual analysis, splenomegaly was related to higher mean egg counts in the feces only in the youngest individuals (5-8 or 5-9 years old), with a tendency for egg excretion to decrease in patients with splenomegaly as age increased. These results strongly suggest that the ecological data are a better indicator of the severity of schistosomiasis in endemic areas, since the decrease in egg excretion in patients with splenomegaly may be a confounding variable in the individual analysis.
Abstract:
Dynamic and distributed environments are hard to model since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives and, thus, call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard any new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate with each belief (core or derived) the corresponding set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, the current supporting justifications. Two major approaches to reason maintenance are used: single- and multiple-context reasoning systems. While in single-context systems each belief is associated with the beliefs that directly generated it, as in the justification-based TMS (JTMS) or the logic-based TMS (LTMS), in the multiple-context counterparts each belief is associated with the minimal set of assumptions from which it can be inferred, as in the assumption-based TMS (ATMS) or the multiple belief reasoner (MBR).
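A minimal sketch of the single-context (justification-based) idea described above: a belief is IN when it is an enabled assumption or has a justification whose antecedents are all IN, and retracting an assumption withdraws support from the beliefs derived from it. This is illustrative Python under those assumptions, not any particular RMS implementation:

```python
class JTMS:
    """Toy justification-based TMS: each belief holds a list of
    justifications (sets of antecedent beliefs), and a belief is IN when
    some justification has all of its antecedents IN."""

    def __init__(self):
        self.assumptions = set()        # core beliefs, enabled directly
        self.justifications = {}        # belief -> list of antecedent sets

    def assume(self, belief):
        self.assumptions.add(belief)

    def retract(self, belief):
        self.assumptions.discard(belief)

    def justify(self, belief, antecedents):
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def is_in(self, belief, _seen=frozenset()):
        if belief in self.assumptions:
            return True
        if belief in _seen:             # guard against circular support
            return False
        return any(all(self.is_in(a, _seen | {belief}) for a in js)
                   for js in self.justifications.get(belief, []))

tms = JTMS()
tms.assume("sensor_reading_ok")
tms.justify("valve_open", ["sensor_reading_ok"])
print(tms.is_in("valve_open"))   # True: supported by the assumption
tms.retract("sensor_reading_ok")
print(tms.is_in("valve_open"))   # False: support has been withdrawn
```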
Abstract:
In this brief, a read-only-memoryless structure for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. This brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for 4n and 8n bits of dynamic range suggest that the proposed conversion structures are able to significantly improve the forward conversion efficiency, with an AT metric improvement above 100% with respect to the related state of the art. Delay improvements of 2.17 times with only a 5% area increase can be achieved if a proper selection of the {2^n ± k} moduli is performed.
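The arithmetic behind such converters rests on the identity 2^n ≡ k (mod 2^n - k) and 2^n ≡ -k (mod 2^n + k), so the high bits of the binary input can be folded back using only a constant multiplication by k and an addition. A minimal Python sketch of this reduction follows (illustrative only; the brief's contribution is a hardware structure of adders and constant multipliers, not a software loop):

```python
def binary_to_rns(x: int, n: int, k: int, plus: bool = False) -> int:
    """Reduce x modulo 2**n - k (or 2**n + k when plus=True) using only
    n-bit slicing, constant multiplications by k, and additions."""
    m = (1 << n) + k if plus else (1 << n) - k
    factor = -k if plus else k          # since 2**n = +/-k (mod m)
    while x >= m:
        low = x & ((1 << n) - 1)        # low n bits
        high = x >> n                   # remaining high bits
        x = high * factor + low         # fold the high part back
        if x < 0:                       # can go negative for 2**n + k
            x %= m
    return x % m

# 32-bit value reduced modulo 2**8 - 3 (n = 8, k = 3):
assert binary_to_rns(0xDEADBEEF, 8, 3) == 0xDEADBEEF % 253
```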
Abstract:
In this paper, a damage-detection approach using the Mahalanobis distance with structural forced dynamic response data, in the form of transmissibility, is proposed. Transmissibility, as a damage-sensitive feature, varies in accordance with the damage level. Moreover, the Mahalanobis distance can distinguish the damaged structural state condition from the undamaged one by condensing the baseline data. For comparison, the Mahalanobis distance results using transmissibility are compared with those using frequency response functions. The experimental results reveal a significant capacity for damage detection, and the comparison between the use of transmissibility and frequency response functions shows that, in both cases, the different damage scenarios could be well detected.
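A minimal sketch of this detection scheme, assuming transmissibility magnitudes collected as feature vectors (illustrative Python; the names, toy data and thresholding remark are assumptions, not the paper's code):

```python
import numpy as np

def mahalanobis_scores(baseline: np.ndarray, test: np.ndarray) -> np.ndarray:
    """Squared Mahalanobis distance of test feature vectors (e.g.
    transmissibility magnitudes at selected frequencies) from the
    baseline (undamaged) condition.

    baseline: (n_samples, n_features) measurements of the healthy state.
    test:     (m_samples, n_features) measurements to classify.
    """
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    cov_inv = np.linalg.pinv(cov)       # pseudo-inverse guards against near-singularity
    diff = test - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(200, 8))   # baseline feature vectors
damaged = healthy[:5] + 3.0                     # shifted vectors mimic damage
print(mahalanobis_scores(healthy, damaged))     # noticeably larger scores
# A detection threshold is commonly set from a percentile of the
# baseline scores; test scores above it flag potential damage.
```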
Abstract:
A bi-enzymatic biosensor (LACC–TYR–AuNPs–CS/GPE) for carbamates was prepared in a single step by electrodeposition of a hybrid film onto a graphene-doped carbon paste electrode (GPE). Graphene and the gold nanoparticles (AuNPs) were morphologically characterized by transmission electron microscopy, X-ray photoelectron spectroscopy, dynamic light scattering and laser Doppler velocimetry. The electrodeposited hybrid film was composed of laccase (LACC), tyrosinase (TYR) and AuNPs entrapped in a chitosan (CS) polymeric matrix. Experimental parameters, namely graphene redox state, AuNPs:CS ratio, enzyme concentration, pH and inhibition time, were evaluated. LACC–TYR–AuNPs–CS/GPE exhibited an improved Michaelis–Menten kinetic constant (26.9 ± 0.5 M) when compared with LACC–AuNPs–CS/GPE (37.8 ± 0.2 M) and TYR–AuNPs–CS/GPE (52.3 ± 0.4 M). Using 4-aminophenol as substrate at pH 5.5, the device presented wide linear ranges, low detection limits (1.68×10⁻⁹ ± 1.18×10⁻¹⁰ to 2.15×10⁻⁷ ± 3.41×10⁻⁹ M), high accuracy, sensitivity (1.13×10⁶ ± 8.11×10⁴ to 2.19×10⁸ ± 2.51×10⁷ % inhibition M⁻¹), repeatability (1.2–5.8% RSD), reproducibility (3.2–6.5% RSD) and stability (ca. twenty days) for the determination of carbaryl, formetanate hydrochloride, propoxur and ziram in citrus fruits based on their inhibitory capacity on polyphenoloxidase activity. Recoveries at two fortification levels ranged from 93.8 ± 0.3% (lemon) to 97.8 ± 0.3% (orange). Glucose, citric acid and ascorbic acid do not interfere significantly with the electroanalysis. The proposed electroanalytical procedure can be a promising tool for food safety control.
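The inhibition-based response reported above is conventionally expressed as the relative drop in the substrate current; a minimal sketch under that assumption (the exact expression used by the authors is not given in the abstract, and the currents below are toy values):

```python
def percent_inhibition(i0: float, i: float) -> float:
    """Inhibition response of an enzymatic biosensor: relative drop in
    the substrate current after incubation with the inhibitor sample.

    i0: steady-state current without inhibitor.
    i:  steady-state current after exposure to the carbamate sample.
    """
    return 100.0 * (i0 - i) / i0

# A calibration curve maps % inhibition back to pesticide concentration
# within the reported linear ranges.
print(percent_inhibition(2.0e-6, 1.4e-6))   # 30.0 (% inhibition)
```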
Abstract:
The preliminary results from a bipolar industrial solid-state Marx generator, developed for the food industry and capable of delivering 25 kV/250 A positive and negative pulses with 12 kW average power, are presented and discussed. This modular topology uses only four controlled switches per cell, with 27 cells in total that can each be charged up to 1000 V; the two extra cells are used for droop compensation. The triggering signals for all the switches are generated by an FPGA. Considering that biomaterials behave similarly to resistive loads, experimental results from this new bipolar 25 kV modulator into resistive loads are presented and discussed.
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling for the purpose of achieving efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
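As a heavily simplified illustration of the task-splitting idea (not the S-EKG or NPS-F algorithms themselves, which rely on slot-based dispatching and exact schedulability tests rather than the naive utilization capacity check below), the following Python sketch assigns tasks to processors first-fit by utilization and splits any task that does not fit across two processors:

```python
def split_assign(utilizations, capacity=1.0):
    """Toy semi-partitioned assignment: fill processors in order by
    utilization and split a task that does not fit between the current
    processor and the next one."""
    processors = [[]]   # each entry: list of (task_id, utilization share)
    load = [0.0]
    for tid, u in enumerate(utilizations):
        while u > 1e-12:
            room = capacity - load[-1]
            share = min(u, room)
            if share > 0:
                processors[-1].append((tid, share))
                load[-1] += share
                u -= share
            if u > 1e-12:               # remainder migrates to a new processor
                processors.append([])
                load.append(0.0)
    return processors

print(split_assign([0.6, 0.6, 0.5]))
# [[(0, 0.6), (1, 0.4)], [(1, 0.2), (2, 0.5)]]
# task 1 is split between processor 0 (share 0.4) and processor 1 (share 0.2)
```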