682 results for timed automata
Abstract:
Background: The purpose of the present study was to compare dynamic muscle strength, functional performance, fatigue, and quality of life in premenopausal systemic lupus erythematosus (SLE) patients with low disease activity versus matched healthy controls, and to determine the association of dynamic muscle strength with fatigue, functional performance, and quality of life in SLE patients. Methods: We evaluated premenopausal (18-45 years) SLE patients with low disease activity (Systemic Lupus Erythematosus Disease Activity Index [SLEDAI]: mean 1.5 ± 1.2). The control (n = 25) and patient (n = 25) groups were matched by age, physical characteristics, and level of physical activity in daily life (International Physical Activity Questionnaire, IPAQ). Neither group had participated in regular exercise programs for at least six months prior to the study. Dynamic muscle strength was assessed by one-repetition maximum (1-RM) tests. Functional performance was assessed by the Timed Up and Go (TUG) test, a 30-s chair-stand test, an arm-curl test using a 2-kg dumbbell, a balance test, handgrip strength, and a sit-and-reach flexibility test. Quality of life (SF-36) and fatigue were also measured. Results: The SLE patients showed significantly lower dynamic muscle strength in all exercises (leg press 25.63%, leg extension 11.19%, leg curl 15.71%, chest press 18.33%, lat pulldown 13.56%, 1-RM total load 18.12%; P values from < 0.001 to 0.02) compared to the controls. The SLE patients also had lower functional performance, greater fatigue, and poorer quality of life. In addition, fatigue, SF-36 score, and functional performance accounted for 52% of the variance in dynamic muscle strength in the SLE patients. Conclusions: Premenopausal SLE patients with low disease activity showed lower dynamic muscle strength, along with increased fatigue, reduced functional performance, and poorer quality of life compared to matched controls.
Market failures and networks in public policy: challenges and possibilities for the Sistema Único de Saúde
Abstract:
The principles and guidelines of the Sistema Único de Saúde (SUS) impose a care structure based on public-policy networks which, combined with the financing model adopted, leads to market failures. This creates barriers to the management of the public health system and to the achievement of the SUS's objectives. The institutional characteristics and the heterogeneity of the actors, together with the existence of different healthcare networks, make the study of the global dynamics of the SUS network analytically complex. There are limitations to the use of quantitative methods based on static analysis of retrospective data from the public health system. We therefore propose approaching the SUS as a complex system, using an innovative quantitative methodology based on computational simulation. This article analyzes the challenges and potential of combining cellular-automata modeling with agent-based modeling to simulate the evolution of the SUS service network. Such an approach should allow a better understanding of the organization, heterogeneity, and structural dynamics of the SUS service network and help minimize the effects of market failures in the Brazilian health system.
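To give a sense of what such a simulation involves, here is a minimal Python sketch of a cellular automaton coupled with agents; the grid, the capacity/demand rules, and all names are invented placeholders for illustration, not the model proposed in the article.

```python
import random

# Toy coupling of a cellular automaton (service capacity on a grid) with
# agents (patients generating demand). All rules here are assumptions.
SIZE = 10
capacity = [[random.randint(0, 3) for _ in range(SIZE)] for _ in range(SIZE)]
patients = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(200)]

def neighbors(i, j):
    """von Neumann neighborhood, clipped at the grid border."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            yield ni, nj

def step():
    demand = [[0] * SIZE for _ in range(SIZE)]
    # Agent phase: each patient seeks care locally, spilling over to the
    # best-equipped neighbor when local capacity is exhausted.
    for i, j in patients:
        if capacity[i][j] > 0:
            demand[i][j] += 1
        else:
            ni, nj = max(neighbors(i, j), key=lambda c: capacity[c[0]][c[1]])
            demand[ni][nj] += 1
    # CA phase: a cell expands when neighborhood demand outstrips its
    # capacity, a crude stand-in for network reorganization.
    for i in range(SIZE):
        for j in range(SIZE):
            local = demand[i][j] + sum(demand[n][m] for n, m in neighbors(i, j))
            if local > 4 * (capacity[i][j] + 1):
                capacity[i][j] += 1

for _ in range(50):
    step()
print(sum(map(sum, capacity)), "total capacity after 50 steps")
```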
Abstract:
The objective of this study was to investigate the effects of eCG and temporary calf removal (TCR) associated with progesterone (P4) treatment on the dynamics of follicular growth, CL size, and P4 concentrations in cyclic (n = 36) and anestrous (n = 30) Nelore cows. Cyclic (C) and anestrous (A) cows were divided into three groups. The control group received 2 mg of estradiol benzoate via intramuscular (IM) injection and an intravaginal device containing 1.9 g of P4 on Day 0. On Day 8, the device was removed, and the animals received 12.5 mg of dinoprost tromethamine IM. After 24 hours, the animals received 1 mg of estradiol benzoate IM. In the eCG group, cows received the same treatment described for the control group but also received 400 IU of eCG at the time of device removal. In the TCR group, calves were separated from the cows for 56 hours after device removal. Ultrasound exams were performed every 24 hours after device removal until the time of ovulation and 12 days after ovulation to measure the size of the CL. On the same day as the CL measurement, blood was collected to determine the plasma P4 level. Statistical analyses were performed with a significance level of P ≤ 0.05. In cyclic cows, the presence of the CL at the beginning of the protocol resulted in a smaller follicle diameter at the time of device removal (7.4 ± 0.3 mm in cows with CL vs. 8.9 ± 0.4 mm in cows without CL; P = 0.03). All cows ovulated within 72 hours after device removal. Anestrous cows treated with eCG or TCR showed follicle diameters at fixed-time artificial insemination (A-eCG 10.2 ± 0.3 and A-TCR 10.3 ± 0.5 mm) and follicular growth rates (A-eCG 1.5 ± 0.2 and A-TCR 1.3 ± 0.1 mm/day) similar to those of cyclic cows (C-eCG 11.0 ± 0.6 and C-TCR 12.0 ± 0.5 mm; C-eCG 1.4 ± 0.2 and C-TCR 1.6 ± 0.2 mm/day, respectively; P ≤ 0.05). Despite the similarities in CL size, the average P4 concentration was higher in the A-TCR group (9.6 ± 1.4 ng/mL) than in the A-control (4.0 ± 1.0 ng/mL) and C-TCR (4.4 ± 1.0 ng/mL) groups (P < 0.05). From these results, we conclude that eCG treatment and TCR improved the fertility of anestrous cows by providing follicular growth rates and dominant-follicle sizes similar to those of cyclic cows. Additionally, TCR increases the plasma concentration of P4 in anestrous cows.
Abstract:
Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it important in applications that model physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static or spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. These feature vectors are then clustered by the well-known k-means algorithm. Although k-means has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation. To overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art segmentation methods.
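To illustrate why initialization matters once the per-pixel features are in hand, here is a small Python sketch using scikit-learn; the synthetic feature vectors stand in for the walk-based features, and only two built-in initialization strategies are compared (the paper's six methods are not reproduced here).

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for the per-pixel feature vectors the paper extracts with
# deterministic partially self-avoiding walks (not reproduced here).
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=c, scale=0.5, size=(300, 8))
                      for c in (0.0, 3.0, 6.0)])

# Since k-means only converges to a local minimum, the starting centroids
# can change the final segmentation; compare two initialization strategies.
for init in ("k-means++", "random"):
    km = KMeans(n_clusters=3, init=init, n_init=10, random_state=0).fit(features)
    print(f"{init:10s} inertia = {km.inertia_:.1f}")
```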
Abstract:
Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most credited Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL which captures the essentials of Service Oriented Computing. From the definition of SOCK we are able to define a general model for dealing with Service Oriented Computing, where services and systems of services are related to the design of finite state automata and process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and forms, together with a subset of the SOCK calculus, the bipolar framework. Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus and part of the bipolar framework we intend to promote.
Abstract:
Water distribution network optimization is a challenging problem due to the dimension and complexity of these systems, and the field has been investigated by many authors since the second half of the twentieth century. Recently, to cope with the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity and linearity of the problem functions because they are linked to an external hydraulic simulator that solves the equations of mass continuity and energy conservation for the network. In this work, NSGA-II (Non-dominated Sorting Genetic Algorithm II), a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature, has been used. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize all the objectives separately and simultaneously. This can be very useful in practical problems where multiple, conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, inserting good solutions created by a cellular automaton, and using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even though steering the search in this way can constrain the algorithm, with the risk of not finding the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to CINECA's help, a version of NSGA-II was implemented in C and parallelized: the results for global parallelization show the speed-up obtained, while the results for island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling were carried out. In this case, good results were found for a small network, while the solutions for a large problem suffer from the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes management very difficult from a human point of view.
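For reference, the ranking step at the core of NSGA-II can be sketched compactly. The Python below is an illustrative fast non-dominated sort with invented objective values; it is not the Exeter MATLAB code or the parallel C version used in the thesis.

```python
# Minimal sketch of fast non-dominated sorting (Deb et al., 2002);
# objective values are assumed to be minimized.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Partition solutions (tuples of objective values) into Pareto fronts."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # who each solution dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)             # first Pareto front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Example with two objectives, e.g. network cost vs. pressure deficit.
print(non_dominated_sort([(1, 5), (2, 2), (3, 4), (5, 1), (4, 4)]))
```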
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting threshold- and supply-voltage scaling, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and system techniques able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of the devices by acting on some tunable parameter, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results show that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design.
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, and their good response to compensation techniques such as ASV and ABB. Hence low-swing is a good alternative to standard CMOS communication for power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
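To give a flavor of what a per-tile thermal analysis iterates, here is a hedged Python sketch of a lumped-RC temperature update; the RC constants, tile powers, and the 2x2 tile layout are invented for illustration, and this is not the thesis's actual library.

```python
# Toy lumped-RC thermal step, of the kind a thermal-analysis library might
# iterate each simulation interval. All numbers below are assumptions.
T_AMB = 45.0    # ambient/package temperature (Celsius)
R_TH = 2.0      # tile-to-ambient thermal resistance (K/W)
R_LAT = 4.0     # lateral resistance between adjacent tiles (K/W)
C_TH = 0.05     # tile thermal capacitance (J/K)
DT = 0.001      # time step (s)

def step(temps, powers, adjacency):
    """One forward-Euler step of dT/dt = (P - heat flows out) / C."""
    new = []
    for i, t in enumerate(temps):
        flow = (t - T_AMB) / R_TH                                   # to ambient
        flow += sum((t - temps[j]) / R_LAT for j in adjacency[i])   # lateral
        new.append(t + DT * (powers[i] - flow) / C_TH)
    return new

# 2x2 mesh of NoC tiles; tile 0 dissipates the most power (potential hot spot).
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
temps = [T_AMB] * 4
for _ in range(5000):
    temps = step(temps, powers=[1.2, 0.3, 0.3, 0.2], adjacency=adjacency)
print([round(t, 1) for t in temps])   # tile 0 settles hottest
```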
Abstract:
Traditional logic gates are rapidly reaching the limits of miniaturization, and the overheating of these components is no longer negligible. A new physical approach to computation was proposed by Prof. C. S. Lent: "Molecular Quantum-dot Cellular Automata". Indeed, the quantum-dot cellular automata (QCA) approach offers an attractive alternative to diode or transistor devices. The units encode binary information in two polarizations, without current flow. The units of QCA theory are called QCA cells and can be realized in several ways; molecules can act as QCA cells at room temperature. In collaboration with STMicroelectronics, the Electrochemistry group of Prof. Paolucci and the Nanotechnology laboratory in Lecce, we synthesized and studied, with many techniques, surface-active chiral bis-ferrocenes conveniently designed to act as prototypical units for molecular computing devices. The chemistry of ferrocene was studied thoroughly, and we found it possible to promote substitution reactions of ferrocenyl alcohols with various nucleophiles without the aid of Lewis acids as catalysts. The only interaction between water and the two reagents is involved in the formation of a carbocation species, which is the true reactive species. We generalized this concept to other benzyl alcohols which generate stabilized carbocations. Carbocations described on Mayr's scale were fundamental to our research. Finally, we used these alcohols to alkylate aldehydes enantioselectively via organocatalysis.
Abstract:
The main aims of my PhD research work have been the investigation of the redox, photophysical and electronic properties of carbon nanotubes (CNT) and their possible uses as functional substrates for the (electro)catalytic production of oxygen and as molecular connectors for quantum-dot molecular automata. While many and diverse applications of CNT in electronics, in the sensor and biosensor field, and as structural reinforcement in composite materials have long been proposed, the study of their properties as individual species has long been a challenging task. CNT are in fact virtually insoluble in any solvent and, for years, most studies have been carried out on bulk samples (bundles). In Chapter 2 an appropriate description of carbon nanotubes is given, covering their production methods and the functionalization strategies for their solubilization. In Chapter 3 an extensive voltammetric and vis-NIR spectroelectrochemical investigation of true solutions of unfunctionalized individual single-wall CNT (SWNT) is reported, which made it possible to determine for the first time the standard electrochemical potentials of reduction and oxidation as a function of tube diameter for a large number of semiconducting SWNTs. We also established the Fermi energy and the exciton binding energy for individual tubes in solution and, from the linear correlation found between the potentials and the optical transition energies, we were able to calculate the redox potentials of SWNTs that are insufficiently abundant or absent in the samples. In Chapter 4 we report very efficient and stable nano-structured, oxygen-evolving anodes (OEA) obtained by assembling an oxygen-evolving polyoxometalate cluster (a totally inorganic ruthenium catalyst) with a conducting bed of multiwalled carbon nanotubes (MWCNT). Here, MWCNT were effectively used as carriers of the polyoxometalate for the electrocatalytic production of oxygen and turned out to greatly increase both the efficiency and the stability of the device, avoiding release of the catalyst. Our bioinspired electrode addresses the major challenge of artificial photosynthesis, i.e. efficient water oxidation, taking us closer to the day when we might power the planet with carbon-free fuels. In Chapter 5 a study of surface-active chiral bis-ferrocenes conveniently designed to act as prototypical units for molecular computing devices is reported. Preliminary electrochemical studies in a liquid environment demonstrated the capability of such molecules to enter three indistinguishable oxidation states. The introduction of side chains allowed us to organize them as self-assembled monolayers (SAM) on a surface and to study their molecular and redox properties on solid substrates. Electrochemical studies on SAMs of these molecules confirmed their aptitude to undergo fast (Nernstian) electron transfer processes generating, in the positive potential region, either the fully oxidized Fc+-Fc+ or the partly oxidized Fc+-Fc species. Finally, in Chapter 6 we report a preliminary electrochemical study of graphene solutions prepared according to an original procedure recently described in the literature. Graphene is the newest of the carbon nanomaterials and is certainly bound to be among the most promising materials for the next nanoelectronics generation.
Abstract:
The assessment of the RAMS (Reliability, Availability, Maintainability and Safety) performance of a system generally includes evaluating the "importance" of its components and/or of the basic parameters of the model through the use of importance measures. The analytical equations proposed in this study allow the estimation of the first-order Differential Importance Measure from the Birnbaum measures of the components, under the hypothesis of uniform percentage changes of the parameters. Aging phenomena are introduced into the model by assuming exponential-linear or Weibull distributions for the failure probabilities. An algorithm based on a combination of Monte Carlo simulation and cellular automata is applied to evaluate the performance of a networked system made up of source nodes, user nodes and directed edges subject to failure and repair. Importance sampling techniques are used to estimate the first- and total-order Differential Importance Measures through a single simulation of the system's "operational life". All the output variables are computed simultaneously on the basis of the same sequence of involved components, event types (failure or repair) and transition times. The failure/repair probabilities are forced to be the same for all components; the transition times are sampled from the unbiased probability distributions, or can also be forced, for instance by ensuring the occurrence of at least one failure within the system's operational life. The algorithm allows considering different types of maintenance actions: corrective maintenance, which can be performed either immediately upon component failure or upon finding that the component has failed (for hidden failures that are not detected until an inspection); and preventive maintenance, which can be performed at fixed intervals. A restoration factor can be used to determine the age of the component after a repair or any other maintenance action.
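As a pointer to what the first-order measure computes, here is a minimal Python sketch of Birnbaum measures and the first-order DIM under uniform percentage changes; the 2-out-of-3 structure function is an invented stand-in for the networked system studied.

```python
# Toy first-order Differential Importance Measure (DIM) from Birnbaum
# measures, for a 2-out-of-3 system (illustrative assumption).
def reliability(p):
    """2-out-of-3 system reliability from component reliabilities p[0..2]."""
    p1, p2, p3 = p
    return p1*p2 + p1*p3 + p2*p3 - 2*p1*p2*p3

def birnbaum(p, i, h=1e-6):
    """Birnbaum measure: partial derivative of system reliability w.r.t. p[i]."""
    up = list(p); up[i] += h
    dn = list(p); dn[i] -= h
    return (reliability(up) - reliability(dn)) / (2 * h)

def dim_uniform_percentage(p):
    """DIM_i = B_i * p_i / sum_j(B_j * p_j) under uniform percentage changes."""
    weighted = [birnbaum(p, i) * p[i] for i in range(len(p))]
    total = sum(weighted)
    return [w / total for w in weighted]

p = [0.95, 0.90, 0.80]
print(dim_uniform_percentage(p))   # DIM values sum to 1 by construction
```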
Abstract:
The neurobiological processes underlying learning and memory are still insufficiently understood, and the role of learning-dependent gene expression in particular remains unclear. Repetition during the learning process promotes the formation of stable memory traces, and learning efficiency can be further increased by learning-free intervals, especially interposed sleep periods. Accordingly, the multi-day Morris Water Maze (MWM) test with a hidden platform can be described as a multi-stage spatial learning process. This test is hippocampus-dependent and produces long-term memory traces in rodents. For this study, F1-generation FVB/NxC57Bl/6 mice were trained in the MWM over four days, the learned behavior was assessed in a probe trial on day 5, and the animals were divided into two groups, "good" and "poor learners", according to their learning performance. Hippocampal expression of candidate genes was analyzed by microarray and real-time PCR one, six, and 24 hours after the last training run of each day. Comparing swim controls with test-naive mice revealed a synchronized, unspecific gene expression associated with the animals' implicit learning of the MWM experience. When the swim controls (no platform) were compared with the trained animals (hidden platform at a constant location), good learners showed, at certain time points, an upregulation of genes associated with learning and memory (PP1, Kibra), neuronal activity (mt-CO1), epigenetics (Dnmt3a, Dnmt3b), and neurodegenerative diseases (Mapt, Sorl1). In the hippocampus of the poor learners, mRNA synthesis of genes associated with learning and memory (Reelin, PP1, Kibra), epigenetics (Dnmt1, Dnmt3a, Dnmt3b), and neurodegenerative diseases (Mapt, Sorl1, APP) was increased compared to the good learners. This study thus provides the first evidence that abnormally increased de novo mRNA synthesis during a multi-day MWM learning process is associated with reduced learning performance.
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods to ensure system reliability. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verifying it. This approach tackles the state-explosion problem with modeling patterns and new algorithms aimed at verifying the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
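For readers unfamiliar with diagnosability checks, here is a hedged Python sketch of a twin-plant style test on a toy automaton; the model, event names, and the simplification that the fault 'f' is the only unobservable event are assumptions for illustration, not the thesis's formulation or tool.

```python
# Toy twin-plant diagnosability check for a discrete-event system.
# delta: (state, event) -> state; 'f' is the unobservable fault event.
delta = {(0, 'a'): 1, (1, 'b'): 1, (1, 'f'): 2, (2, 'b'): 2}
observable = {'a', 'b'}

def successors(v):
    """Verifier state (q1, f1, q2, f2): observables move both copies in
    lockstep; the unobservable fault moves one copy at a time."""
    q1, f1, q2, f2 = v
    succ = [(delta[(q1, e)], f1, delta[(q2, e)], f2)
            for e in observable if (q1, e) in delta and (q2, e) in delta]
    if (q1, 'f') in delta:
        succ.append((delta[(q1, 'f')], True, q2, f2))
    if (q2, 'f') in delta:
        succ.append((q1, f1, delta[(q2, 'f')], True))
    return succ

def reachable(start_set):
    seen, stack = set(), list(start_set)
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(successors(v))
    return seen

def diagnosable(init=0):
    """Non-diagnosable iff an ambiguous pair (fault seen on one side only)
    lies on a cycle: both runs then share observables forever."""
    for v in reachable({(init, False, init, False)}):
        q1, f1, q2, f2 = v
        if f1 != f2 and v in reachable(set(successors(v))):
            return False
    return True

# After 'a', the observation 'b, b, ...' is the same with or without 'f'.
print(diagnosable())   # -> False
```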
Abstract:
Most patients undergoing surgery for solid tumors are over 70 years of age, and about 30% of these patients are considered "frail". These patients present numerous comorbidities and have a higher risk of developing postoperative complications with loss of residual functional reserve. For these patients there are no simple screening tools that allow the treating physicians to identify those at increased postoperative risk. Identifying at-risk patients is the first step in the process of preventing avoidable postoperative complications such as delirium, cardiovascular events, and loss of overall function with consequent loss of autonomy. The aim of this study is to compare the accuracy in predicting 30-day mortality and morbidity of three predictive tests, the "Groningen Frailty Index" (GFI), the "Vulnerable Elders Survey" (VES-13), and the "timed up and go test", with some components of the Preoperative Assessment of Cancer in the Elderly (PACE). The study will be carried out on patients over 70 years of age who are to undergo surgery under general anesthesia for a solid neoplasm.
Abstract:
Automatic design has become a common approach to evolving complex networks, such as artificial neural networks (ANNs) and random Boolean networks (RBNs), and many evolutionary setups have been discussed to increase the efficiency of this process. However, networks evolved in this way have a few limitations that should not be overlooked. One of these is the black-box problem, which refers to the impossibility of analyzing the internal behaviour of complex networks in an efficient and meaningful way. The aim of this study is to develop a methodology that makes it possible to extract finite-state automata (FSA) descriptions of robot behaviours from the dynamics of automatically designed complex controller networks. These FSAs, unlike the complex networks from which they are extracted, are both readable and editable, thus making the resulting designs much more valuable.
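One simple way to see what such an extraction can look like: discretize the recorded network states and tabulate the input-driven transitions. The Python below is an illustrative sketch with an invented trace and a rounding step standing in for proper clustering; it is not the methodology developed in the study.

```python
from collections import defaultdict

# Distill an FSA from a controller's recorded dynamics by labeling network
# states (crude rounding as a stand-in for clustering) and counting the
# transitions each input symbol induces. Trace data below is invented.
def extract_fsa(trace):
    """trace: list of (network_state_vector, input_symbol) pairs."""
    label = lambda s: tuple(round(x, 1) for x in s)   # crude state clustering
    transitions = defaultdict(set)
    for (s, sym), (s_next, _) in zip(trace, trace[1:]):
        transitions[(label(s), sym)].add(label(s_next))
    return dict(transitions)

trace = [((0.02, 0.98), "obstacle"), ((0.71, 0.11), "clear"),
         ((0.69, 0.12), "clear"), ((0.03, 0.97), "obstacle"),
         ((0.70, 0.10), "clear")]
for (state, sym), nexts in extract_fsa(trace).items():
    # more than one successor flags nondeterminism left by the discretization
    print(f"{state} --{sym}--> {sorted(nexts)}")
```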
Abstract:
Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock–Cooper–Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing–dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity.
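The triplet interactions are compact enough to state in code. The following Python sketch follows the Pfister-Gerstner trace formulation of triplet STDP; the amplitudes and time constants are illustrative placeholders rather than the paper's fitted values.

```python
import numpy as np

# Triplet STDP sketch: each neuron keeps a fast and a slow spike trace, and
# the weight change triggered by a spike depends on the other neuron's traces,
# so triplets of spikes interact. Parameter values below are assumptions.
tau_r1, tau_r2 = 16.8, 101.0    # presynaptic trace time constants (ms)
tau_o1, tau_o2 = 33.7, 125.0    # postsynaptic trace time constants (ms)
A2p, A3p = 0.0046, 0.0091       # pair and triplet potentiation amplitudes
A2m, A3m = 0.0030, 0.0007       # pair and triplet depression amplitudes

def run(pre_spikes, post_spikes, T=500.0, dt=1.0, w=0.5):
    r1 = r2 = o1 = o2 = 0.0
    for t in np.arange(0.0, T, dt):
        pre, post = t in pre_spikes, t in post_spikes
        if pre:   # depression uses o1 and the slow pre trace r2 (pre-update)
            w -= o1 * (A2m + A3m * r2)
        if post:  # potentiation uses r1 and the slow post trace o2 (pre-update)
            w += r1 * (A2p + A3p * o2)
        # exponential decay plus unit increments on spikes
        r1 += -dt * r1 / tau_r1 + pre
        r2 += -dt * r2 / tau_r2 + pre
        o1 += -dt * o1 / tau_o1 + post
        o2 += -dt * o2 / tau_o2 + post
    return w

# A post-pre-post triplet potentiates more than a pair-based rule predicts,
# because the second post spike reads a nonzero slow trace o2.
print(run(pre_spikes={110.0}, post_spikes={100.0, 120.0}))
```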