898 results for Time and hardware redundancy
Abstract:
To analyse the associations between high screen time and overweight, poor dietary habits and physical activity in adolescents according to sex. The study comprised 515 boys and 716 girls aged 14-17 years from Londrina, Brazil. Nutritional status (normal weight or overweight/obese) was assessed by calculating the body mass index. Eating habits and time spent in physical activity were reported using a questionnaire. The measurement of screen time considered the time spent watching television, using a computer and playing video games during a normal week. Associations between high screen time and the dependent variables (nutritional status, eating habits and physical activity levels) were assessed by binary logistic regression, adjusted for sociodemographic and lifestyle variables. Most adolescents (93.8% of boys and 87.2% of girls) spent more than 2 hours per day in screen-time activities. After adjustment, an increasing trend in the prevalence of overweight and physical inactivity with increasing screen time was observed for both sexes. Screen times of >4 hours/day, compared with <2 hours/day, were associated with physical inactivity, low vegetable consumption and high consumption of sweets in girls only, and with soft drink consumption in both sexes. The frequency of overweight and physical inactivity increased with screen time in a graded manner, independently of the main confounders. The relationship between high screen time and poor eating habits was particularly relevant for adolescent girls.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Educational institutions of all levels invest large amounts of time and resources into instructional technology, with the goal of enhancing the educational effectiveness of the learning environment. The decisions made by instructors and institutions regarding the implementation of technology are guided by the perceptions of usefulness held by those who are in control. The primary objective of this mixed methods study was to examine student and faculty perceptions of the technology being used in general education courses at a community college. This study builds upon and challenges the assertions of writers such as Prensky (2001a, 2001b) and Tapscott (1998), who claim that a vast difference in technology perception exists between generational groups, resulting in a diminished usefulness of technology in instruction. In this study, data were gathered through student surveys and interviews, and through faculty surveys and interviews. Analysis of the data used Kendall's Tau test for correlation between various student and faculty variables in various groupings, and also typological analysis of the transcribed interview data. The analysis of the quantitative data revealed no relationship between age and perception of technology's usefulness. A positive relationship was found to exist between the perceived frequency of technology use and the perceived effectiveness of technology, suggesting that both faculty members and students believed that the more technology is used, the more useful it is in instruction. The analysis of the qualitative data revealed that both faculty and students perceive technology to be useful, and that the most significant barriers to technology's usefulness include faulty hardware and software systems, lack of user support, and lack of training for faculty.
The results of the study suggest that the differences in perception of technology between generations proposed by Prensky may not exist when comparing adults from the younger generation with adults from the older generation. Further, the study suggests that institutions continue to invest in instructional technology, with a focus on high levels of support and training for faculty, and more universal availability of specific technologies, including web access, in-class video, and presentation software. Adviser: Ronald Joekel
Abstract:
Objectives: To determine the microhardness profile of two dual-cure resin cements (RelyX U100 (R), 3M-ESPE and Panavia F 2.0 (R), Kuraray) used for cementing fiber-reinforced resin posts (Fibrekor (R), Jeneric Pentron) under three different curing protocols and two water storage times. Material and methods: Sixty 16-mm-long bovine incisor roots were endodontically treated and prepared for cementation of the Fibrekor posts. The cements were mixed as instructed and dispensed in the canal, the posts were seated, and curing was performed as follows: a) no light activation; b) light activation immediately after seating the post; and c) light activation delayed 5 minutes after seating the post. The teeth were stored in water and retrieved for analysis after 7 days and 3 months. The roots were longitudinally sectioned and the microhardness was determined at the cervical, middle and apical regions along the cement line. The data were analyzed by three-way ANOVA (curing mode, storage time and root third) for each cement. The Tukey test was used for post-hoc analysis. Results: Light activation resulted in a significant increase in microhardness. This was more evident for the cervical region and for the Panavia cement. Storage in water for 3 months caused a reduction in microhardness for both cements. The U100 cement showed less variation in microhardness regardless of the curing protocol and storage time. Conclusions: The microhardness of the cements was affected by the curing and storage variables, and the effects were material-dependent.
Abstract:
L. Antonangelo, F. S. Vargas, M. M. P. Acencio, A. P. Cora, L. R. Teixeira, E. H. Genofre and R. K. B. Sales. Effect of temperature and storage time on cellular analysis of fresh pleural fluid samples. Objective: Despite the methodological variability in preparation techniques for pleural fluid cytology, it is fundamental that the cells be preserved, permitting adequate morphological classification. We evaluated numerical and morphological changes in pleural fluid specimens processed after storage at room temperature or under refrigeration. Methods: Aliquots of pleural fluid from 30 patients, collected in ethylenediaminetetraacetic acid-coated tubes and maintained at room temperature (21 degrees C) or under refrigeration (4 degrees C), were evaluated after 2 and 6 hours and 1, 2, 3, 4, 7 and 14 days. The evaluation included cytomorphology and global and percentage counts of leucocytes, macrophages and mesothelial cells. Results: The samples showed quantitative cellular variations from day 3 or 4 onwards, depending on the storage conditions. Morphological alterations occurred earlier in samples maintained at room temperature (day 2) than in those under refrigeration (day 4). Conclusions: This study confirms that storage time and temperature are potential pre-analytical causes of error in pleural fluid cytology.
Abstract:
OBJECTIVE: To evaluate the association between tourniquet and total operative time during total knee arthroplasty and the occurrence of deep vein thrombosis. METHODS: Seventy-eight consecutive patients from our institution underwent cemented total knee arthroplasty for degenerative knee disorders. The pneumatic tourniquet time and total operative time were recorded in minutes. Four categories were established for total tourniquet time: <60, 61 to 90, 91 to 120, and >120 minutes. Three categories were defined for operative time: <120, 121 to 150, and >150 minutes. Between 7 and 12 days after surgery, the patients underwent ascending venography to evaluate the presence of distal or proximal deep vein thrombosis. We evaluated the association between the tourniquet time and total operative time and the occurrence of deep vein thrombosis after total knee arthroplasty. RESULTS: In total, 33 cases (42.3%) were positive for deep vein thrombosis; 13 (16.7%) cases involved the proximal type. We found no statistically significant difference in tourniquet time or operative time between patients with or without deep vein thrombosis. We did observe a higher frequency of proximal deep vein thrombosis in patients who underwent surgery lasting longer than 120 minutes. The mean total operative time was also higher in patients with proximal deep vein thrombosis. The tourniquet time did not significantly differ in these patients. CONCLUSION: We concluded that surgery lasting longer than 120 minutes increases the risk of proximal deep vein thrombosis.
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper by presenting a numerical example of a multi-period portfolio selection problem with regime switching, in which it is desired to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio greater than some minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
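The constrained minimum-variance idea behind the numerical example can be illustrated in its simplest single-period form. This is a sketch only, not the paper's Markov-jump formulation: the expected returns, covariance matrix and minimum expected value below are invented for illustration, and the closed-form Lagrangian solution shown is the textbook static case.

```python
import numpy as np

# Minimize portfolio variance w' S w subject to an expected value w' mu = r_min
# and full investment w' 1 = 1. The Lagrange conditions give S w = lam*mu + gam*1,
# so the two multipliers solve a 2x2 linear system (all numbers invented).
mu = np.array([0.04, 0.08, 0.12])           # expected asset returns
S = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.15, 0.03],
              [0.01, 0.03, 0.20]])           # covariance matrix
r_min = 0.09                                 # investor's minimum expected value

ones = np.ones(len(mu))
Sinv = np.linalg.inv(S)
A = np.array([[mu @ Sinv @ mu,  mu @ Sinv @ ones],
              [ones @ Sinv @ mu, ones @ Sinv @ ones]])
lam, gam = np.linalg.solve(A, np.array([r_min, 1.0]))
w = Sinv @ (lam * mu + gam * ones)           # optimal weights

print("weights:", w)
print("expected value:", w @ mu, "variance:", w @ S @ w)
```

The multi-period, regime-switching problem treated in the paper layers system dynamics and Markov jump states on top of this static core, which is why existence conditions there require a dedicated analysis.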
Abstract:
To evaluate the biocompatibility and the setting time of Portland cement clinker with or without 2% or 5% calcium sulfate, and of MTA-CPM. Twenty-four rats (Rattus norvegicus) received subcutaneous polyethylene tube implants filled with Portland cement clinker with or without 2% or 5% calcium sulfate, or MTA. After 15, 30 and 60 days of implantation, the animals were killed and specimens were prepared for microscopic analysis. For evaluation of the setting time, each material was analyzed using Gilmore needles weighing 113.5 g and 456.5 g, according to the ASTM C266-08 specification. Data were analyzed by ANOVA and Tukey's test for setting time, and by Kruskal-Wallis and Dunn tests for biocompatibility, at a 5% significance level. Histologic observation showed no statistically significant difference in biocompatibility (p>0.05) among the materials in the subcutaneous tissues. For the setting time, clinker without calcium sulfate showed the shortest initial and final setting times (6.18 s/21.48 s), followed by clinker with 2% calcium sulfate (9.22 s/25.33 s), clinker with 5% calcium sulfate (10.06 s/42.46 s) and MTA (15.01 s/42.46 s). All the tested materials were biocompatible, and the absence of calcium sulfate shortened the initial and final setting times of the white Portland cement clinker.
Abstract:
Organic electronics has grown enormously during the last decades, driven by encouraging results and the potential of these materials to enable innovative applications, such as flexible large-area displays, low-cost printable circuits, plastic solar cells and lab-on-a-chip devices. Moreover, their possible fields of application reach from medicine, biotechnology, process control and environmental monitoring to defense and security requirements. However, a large number of questions regarding the mechanism of device operation remain unanswered. Among the most significant is charge carrier transport in organic semiconductors, which is not yet well understood. Another example is the correlation between morphology and electrical response. Although it is recognized that the growth mode plays a crucial role in device performance, it has not been exhaustively investigated. The main goal of this thesis was to find a correlation between growth modes, electrical properties and morphology in organic thin-film transistors (OTFTs). In order to study the thickness dependence of electrical performance in organic ultra-thin-film transistors, we designed and developed a home-built experimental setup for real-time electrical monitoring and post-growth in situ electrical characterization. We grew pentacene TFTs under high-vacuum conditions, systematically varying the deposition rate at fixed room temperature. The drain-source current IDS and the gate-source current IGS were monitored in real time, while a complete post-growth in situ electrical characterization was carried out. Finally, an ex situ morphological investigation was performed using atomic force microscopy (AFM). In this work, we present the correlation for pentacene TFTs between growth conditions, Debye length and morphology (through the correlation length parameter).
We have demonstrated that there is a layered charge carrier distribution, strongly dependent on the growth mode (i.e., the deposition rate at a fixed temperature), leading to a variation of the conduction channel from 2 to 7 monolayers (MLs). We reconcile earlier reported results that were apparently contradictory. Our results make evident the necessity of reconsidering the concept of Debye length in a layered low-dimensional device. Additionally, we introduce for the first time a breakthrough technique that reveals the percolation of the first MLs in pentacene TFTs by monitoring IGS in real time, correlating morphological phenomena with the electrical response of the device. The present thesis is organized into the following five chapters. Chapter 1 introduces organic electronics, illustrating the operating principle of TFTs. Chapter 2 presents organic growth from theoretical and experimental points of view; its second part presents the electrical characterization of OTFTs and the typical performance of pentacene devices. In addition, we introduce a correction technique for the reconstruction of measurements hampered by leakage current. In Chapter 3, we describe in detail the design and operation of our innovative home-built experimental setup for performing real-time and in situ electrical measurements. Some preliminary results and the breakthrough technique for correlating morphological and electrical changes are presented. Chapter 4 gathers the most important results obtained under real-time and in situ conditions, which correlate growth conditions, electrical properties and morphology of pentacene TFTs. In Chapter 5 we describe applicative experiments in which the electrical performance of pentacene TFTs was investigated in ambient conditions, in contact with water or aqueous solutions and, finally, in the detection of DNA concentration as a label-free sensor, within the biosensing framework.
Abstract:
This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, makes it possible to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the transmission and vehicle dynamics, allowing in the latter case an evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool includes different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line application, because of the higher computational effort).
Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architecture without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time application. The Matlab-Simulink environment has been adopted to achieve such objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
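The closed-loop structure that such a tool simulates can be sketched in a few lines. The following minimal software-in-the-loop sketch uses an invented plant (longitudinal vehicle dynamics) and a toy proportional controller standing in for the ECU; none of the parameters or functions are taken from the thesis's Simulink model. In a HIL configuration the controller block would be the real ECU and the loop would run on a real-time processor.

```python
# Fixed-step SIL sketch (all parameters invented): plant model in a loop with
# a controller stand-in. Each step, the "ECU" reads the simulated speed sensor
# and the plant integrates the traction and drag forces with forward Euler.
MASS = 1200.0     # vehicle mass [kg]
DRAG = 0.4        # lumped aerodynamic drag coefficient [N s^2/m^2]
F_MAX = 4000.0    # maximum traction force [N]
DT = 0.01         # fixed simulation step [s]

def ecu_controller(v_target, v_actual):
    """Toy ECU stand-in: proportional throttle request, clipped to [0, 1]."""
    return min(1.0, max(0.0, 0.05 * (v_target - v_actual)))

def plant_step(v, throttle):
    """One Euler step of the longitudinal dynamics: m*dv/dt = F - c*v^2."""
    force = throttle * F_MAX - DRAG * v * v
    return v + DT * force / MASS

v = 0.0
for _ in range(int(60.0 / DT)):         # simulate 60 s of driving
    throttle = ecu_controller(25.0, v)  # target speed: 25 m/s
    v = plant_step(v, throttle)

print(f"speed after 60 s: {v:.1f} m/s")
```

The design choice mirrored here is the one described above: the plant and the controller communicate only through the signals a real ECU would see (sensor values in, actuator commands out), so the controller block can be swapped between a model and real hardware without touching the plant.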
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, dampings and vibration modes.
The application to the case of ambient vibrations yields accurate modal parameters of the system, although some important caveats apply to the damping estimates. The fourth chapter again addresses the problem of post-processing data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to determine what type of model is able to capture more accurately the real dynamic behaviour of the bridge. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and different types of uncertainty.
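The classical FFT-based construction of the FRF described for the second chapter can be sketched as follows. The discrete two-pole resonator standing in for a measured lightly damped structure, and all of its parameters, are invented for illustration; a real test would use recorded force and response signals instead.

```python
import numpy as np

# FRF via FFT (all parameters invented): a two-pole resonator plays the role
# of a lightly damped SDOF structure. It is excited with a unit impulse, the
# FRF is the FFT of the output divided by the FFT of the input, and the peak
# of its magnitude locates the resonant frequency.
fs = 1000.0   # sampling frequency [Hz]
f0 = 50.0     # resonator pole frequency [Hz]
r = 0.98      # pole radius (sets the damping)
N = 4096      # record length [samples]

theta = 2 * np.pi * f0 / fs
x = np.zeros(N)
x[0] = 1.0    # unit impulse input
y = np.zeros(N)
for n in range(N):  # y[n] = 2 r cos(theta) y[n-1] - r^2 y[n-2] + x[n]
    y[n] = x[n]
    if n >= 1:
        y[n] += 2 * r * np.cos(theta) * y[n - 1]
    if n >= 2:
        y[n] -= r * r * y[n - 2]

H = np.fft.rfft(y) / np.fft.rfft(x)   # FRF: output spectrum over input spectrum
freqs = np.fft.rfftfreq(N, d=1 / fs)
f_peak = freqs[np.argmax(np.abs(H))]
print(f"estimated natural frequency: {f_peak:.1f} Hz")
```

With an impulse input the spectral division is well conditioned; with broadband or ambient excitation one would average cross- and auto-spectra over many windows, which is where the wavelet-based methods of the later chapters take over.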
Abstract:
During the last few decades an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), hardware designers face a design space that has exploded in size. Virtual Platforms have always been used to enable hardware-software co-design, but today they must cope with the huge complexity of both hardware and software systems. In this thesis two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis is focused on virtualization techniques with the goal of mitigating, and where possible overcoming, some of the challenges introduced by the many-core design paradigm.
Abstract:
The paralysis-by-analysis phenomenon, i.e., attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not refer to the level of underlying mechanisms on the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumption that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention.
Abstract:
The precise timing of events in the brain has consequences for intracellular processes, synaptic plasticity, integration and network behaviour. Pyramidal neurons, the most widespread excitatory neurons of the neocortex, have multiple spike initiation zones, which interact via dendritic and somatic spikes actively propagating in all directions within the dendritic tree. For these neurons, therefore, both the location and timing of synaptic inputs are critical. The time window in which the backpropagating action potential can influence dendritic spike generation has been extensively studied in layer 5 neocortical pyramidal neurons of rat somatosensory cortex. Here, we re-examine this coincidence detection window for pyramidal cell types across the rat somatosensory cortex in layers 2/3, 5 and 6. We find that the time window for optimal interaction is widest and shifted in layer 5 pyramidal neurons relative to cells in layers 6 and 2/3. Inputs arriving at the same time and locations will therefore differentially affect spike-timing-dependent processes in the different classes of pyramidal neurons.
Abstract:
This thesis explores system performance for reconfigurable distributed systems and provides an analytical model for determining the throughput of theoretical systems based on the OpenSPARC FPGA Board and the SIRC Communication Framework. This model was developed by studying a small set of variables that together determine a system's throughput. The importance of this model is in assisting system designers to make decisions as to whether or not to commit to designing a reconfigurable distributed system, based on the estimated performance and hardware costs. Because custom hardware design and distributed system design are both time consuming and costly, it is important for designers to make decisions regarding system feasibility early in the development cycle. Based on experimental data, the model presented in this paper shows a close fit, with less than 10% experimental error on average. The model is limited to a certain range of problems, but it can still be used given those limitations and also provides a foundation for further development of modeling reconfigurable distributed systems.
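A throughput model of the general kind described might look like the following sketch. The serial transfer-then-compute assumption, the function names and all numbers are invented here, not taken from the thesis; they simply illustrate how a small set of variables can be combined into an early feasibility estimate.

```python
# Hypothetical analytical throughput model for a reconfigurable distributed
# system: each node's rate is limited by link transfer time plus FPGA compute
# time (no overlap assumed), and nodes process independent jobs in parallel.
def node_throughput(bytes_per_job, link_bps, cycles_per_job, clock_hz):
    """Jobs per second for one node: serial transfer, then compute."""
    transfer_s = bytes_per_job / link_bps
    compute_s = cycles_per_job / clock_hz
    return 1.0 / (transfer_s + compute_s)

def system_throughput(n_nodes, **kw):
    """Ideal scaling: n_nodes nodes working on independent jobs."""
    return n_nodes * node_throughput(**kw)

# Example (all numbers invented): 1 MB jobs over a 1 Gbit/s link (125 MB/s),
# 10^6 FPGA cycles per job at a 100 MHz clock, 8 nodes.
t = system_throughput(n_nodes=8, bytes_per_job=1e6, link_bps=125e6,
                      cycles_per_job=1e6, clock_hz=100e6)
print(f"estimated throughput: {t:.1f} jobs/s")
```

A model like this is useful precisely for the early go/no-go decision the abstract describes: plugging in rough link, clock and job-size estimates shows whether the communication or the computation term dominates before any hardware is designed.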