893 results for research activity - patterns
Abstract:
OBJECTIVE To analyze the physical activity patterns of low-risk pregnant women and the associated factors. METHODS Cross-sectional study of 256 adult pregnant women in the second gestational trimester, randomly selected from among those attended by the primary health care units of the municipality of Botucatu, SP, Brazil, in 2010. Physical activity was investigated using the Pregnancy Physical Activity Questionnaire, recording the duration and intensity of occupational, commuting, household, and leisure activities, expressed in metabolic equivalents per day. The women were classified by activity level and by whether they reached 150 min/week of leisure-time physical activity, the dependent variables of the study. The association between these variables and socioeconomic variables, maternal characteristics, behavioral factors, and the care model of the health unit was assessed using Poisson regression models with robust variance, following a hierarchical model. RESULTS Most of the women were insufficiently active (77.7%), 12.5% were moderately active, and 9.8% vigorously active. The highest daily energy expenditure was on household activities, followed by commuting; 10.2% met the recommendation of 150 min of leisure-time physical activity per week. Working outside the home reduced the chance of meeting this recommendation (PR = 0.39, 95%CI 0.16;0.93). Having had at least one previous delivery (PR = 0.87, 95%CI 0.77;0.99) and pre-pregnancy excess weight (PR = 0.85, 95%CI 0.73;0.99) reduced the chance of being insufficiently active, while less frequent consumption of healthy foods slightly increased it (PR = 1.18, 95%CI 1.02;1.36). CONCLUSIONS Pregnant women attended in primary health care are insufficiently active.
Having had at least one previous delivery and being overweight before pregnancy were identified as protective factors against this condition, while less frequent consumption of healthy foods was a risk factor, suggesting a clustering of health risk factors.
Abstract:
The concept of gender is relational and refers to the overcoming of biological determinism; it has historical and cultural roots that go beyond the anatomical differences between men and women. This documental research investigated patterns of sexuality and gender in sex education books for children and teenagers. Ten books, selected from bookstore websites, were analyzed. The content of the books and their illustrations show: 1) a stereotypical and naturalized view of femininity and masculinity; 2) patterns of traditional, patriarchal families; 3) a romanticized concept of marriage and reproduction; 4) stereotypes of beauty and the "normal" body. It is concluded that sex education books for children and teenagers reproduce normative standards that can encourage a sexist education and should not be used in sex education work without critical reflection.
Abstract:
This research describes the assembly, development, and execution of an experiment and of an interdisciplinary, student-research activity built by undergraduates from the Biological Sciences, Physics, Mathematics, and Chemistry PIBID subprojects of the Faculty of Science (FC - UNESP), Bauru/SP campus. These subprojects form a single interdisciplinary group whose main activity is the Superclass, an intervention by PIBID members during regular class hours using an interdisciplinary, student-research didactic sequence, applied by four groups of ten members in two local state schools. The experiment treated in this work was a miniature cannon (minicannon), used in a Superclass whose theme was War and the Development of Science, in which knowledge from all areas was built from the analysis of the multiple variables involved in the cannon shots. These variables were explored with the students through the experiment during the Superclass, focusing on three of them: the shot and the explosives; the trajectory; and the target. The experiment was extremely successful, as it engaged the students, who were interested, inquisitive, and eager to debate throughout the class. The minicannon made interdisciplinary, student-research work possible among the four areas of science involved. Although laborious to build, the experiment proved to be innovative, and its use transformed the students' naive curiosity into epistemological curiosity, a key factor in the construction of scientific knowledge, with the dialogue established in the Superclass exploring this new curiosity.
Abstract:
This study aimed to describe the routines of three families that adopted children with special needs, having had prior knowledge of the children's condition. The Multiple Case Study method was used. Data were obtained through a semi-structured interview (ES), the Routine Inventory (IR), and a field diary (DC). Regarding similarities among the family groups, feeding/hygiene, rest, and leisure activities were common, involving the participation of parents, sisters, and nannies, generally in the family's domestic environments. Important differences were observed in the patterns of activities, companions, and settings in which the routines took place. It is concluded that variations in the routines are related to the particularities of each child and to the structure and socioeconomic level of each participating family.
Abstract:
Neurofeedback (NF) is a training to enhance self-regulatory capacity over brain activity patterns and consequently over brain mental states. Recent findings suggest that NF is a promising alternative for the treatment of attention-deficit/hyperactivity disorder (ADHD). We comprehensively reviewed the literature, searching for studies on the effectiveness and specificity of NF for the treatment of ADHD. In addition, clinically informative evidence-based data are discussed. We found 3 systematic reviews on the use of NF for ADHD and 6 randomized controlled trials not included in these reviews. Most nonrandomized controlled trials found positive results with medium-to-large effect sizes, but the evidence for effectiveness is less robust when only randomized controlled studies are considered. The direct comparison of NF and sham-NF in 3 published studies found no group differences; nevertheless, methodological caveats, such as the quality of the training protocol used, sample size, and sample selection, may have contributed to the negative results. Further data on specificity come from electrophysiological studies reporting that NF effectively changes brain activity patterns. No safety issues have emerged from clinical trials, and NF seems to be well tolerated and accepted. Follow-up studies support long-term effects of NF. Currently there are no available data to guide clinicians on the predictors of response to NF or on the optimal treatment protocol. In conclusion, NF is a valid option for the treatment of ADHD, but further evidence is required to guide its use.
Abstract:
Members of the subfamily Crotalinae are considered to be essentially nocturnal, and most of the data about these snakes have been collected in the field. Information on how nutritional status affects movement rate and activity patterns is key to elucidating the ecophysiology of snakes. In this study, we distributed 28 lanceheads (Bothrops moojeni) into three groups under distinct feeding regimens after a month of fasting. The groups were as follows: ingestion of meals weighing (A) 40%, (B) 20%, or (C) 10% of the snake's body mass. Groups were monitored for five days before and after food intake, and activity periods and movement rates were recorded. Our results show that B. moojeni is prevalently nocturnal, with the activity peak occurring in the first three hours of the scotophase. After feeding, a significant decrease in activity levels was detected in groups A and B. The current results corroborate previous field data describing B. moojeni as a nocturnal species with low movement rates. The relationship between motion and the amount of food consumed by the snake may be associated with its hunting strategy.
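The kind of activity-pattern summary described above can be illustrated with a minimal sketch; the hourly movement counts below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical hourly movement counts for one snake over a 12 h scotophase
# (hour 0 = onset of darkness). Values are illustrative only.
movements = np.array([14, 18, 11, 5, 4, 3, 2, 2, 1, 1, 0, 1])

# Fraction of nightly activity concentrated in the first three hours of
# darkness, and the hour of peak activity.
early_fraction = movements[:3].sum() / movements.sum()
peak_hour = int(np.argmax(movements))

print(f"peak at hour {peak_hour}, "
      f"{early_fraction:.0%} of activity in first 3 h of scotophase")
```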
Abstract:
The primary stability of stems in cementless total hip replacement is recognized to play a critical role in long-term survival, and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue. Different approaches have been explored to evaluate the extent of stability achieved during surgery. Some are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA, and radiographic analysis are currently used to check the healing of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies but do not assist the surgeon during implantation. At the time I started my PhD study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented describing further results obtained with that device. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would consistently improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed, and validated. The device is meant to help the surgeon decide how much to press-fit the implant.
It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables attaching the device to the stem; and the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, information that is clinically relevant and simple for the surgeon to read in real time. The second study reported in my thesis concerns the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check before implantation whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed for this purpose. It was derived from the previously validated device, adapted for this specific use. The device measures the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem, when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro in a closed loop, verification was performed on two hip patients, confirming the in-vitro results and highlighting the independence of the rasp indicator from the bone quality, anatomy, and preservation conditions of the tested specimens, and from the sharpness of the rasp blades. The third study relates to an approach that has recently been explored in the orthopaedic community but was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluating the fixation of dental implants has been explored, even if its validity in that field is still under discussion. Several studies have recently been published on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on vibration analysis to measure intra-operatively the extent of implant stability. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions, and lower overall cost with respect to devices based on direct micromotion measurement. The prototype consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-based device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered.
The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, since this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the acquisitions made with the vibration-based tool against two reference measurements obtained with a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. It thus seems possible to discriminate between many levels of stability, from the grossly loosened implant, through quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and become fairly popular in some countries: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacements, its effect on the outcome of resurfacing implants had been investigated only clinically, never in-vitro. The work therefore aimed at verifying whether one of the intra-operative devices just validated could be applied to measure micromotion in resurfacing implants. To do so, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotion of resurfacing implants. This included a set of in-vitro loading scenarios covering the range of directions spanned by the hip resultant force in the most typical motor tasks.
The applicability of the protocol was assessed on two different commercial designs and on different head sizes. Repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). The results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that applying an intra-operative device to resurfacing implants is not necessary, as the typical micromotion associated with this type of prosthesis can be considered negligible and thus not critical for the stabilization process. In conclusion, four intra-operative tools were developed and fully validated during these three years of research activity. Clinical use was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of stem stability, and could therefore help the surgeon check whether the pre-operative planning was correct. The vibration-based device showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by surgeons; it still needs clinical evaluation and must be industrialized as well. The in-vitro tool worked very well and can be applied to assess resurfacing implants pre-clinically.
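As an illustration of the vibration-analysis principle discussed above, the sketch below estimates the resonance-frequency shift of a stem-bone system from two synthetic acceleration responses; all frequencies, damping values, and signal parameters are invented for illustration and are not the thesis's measured data:

```python
import numpy as np

# Synthetic acceleration responses of a stem-bone system. A stiffer (more
# stable) interface raises the resonance frequency of the coupled system.
fs = 10_000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # 1 s record -> 1 Hz FFT resolution

def damped_response(f0, zeta=0.02):
    """Impulse response of a single resonant mode at f0 Hz."""
    return np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)

loose = damped_response(f0=420.0)    # quasi-stable implant (illustrative)
stable = damped_response(f0=510.0)   # well press-fitted implant (illustrative)

def resonance_hz(signal):
    """Locate the dominant spectral peak of the response."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs[np.argmax(spectrum)]

shift = resonance_hz(stable) - resonance_hz(loose)
print(f"resonance shift: {shift:.0f} Hz")
```

In practice the exciter drives the stem over a frequency band and the accelerometer response is compared across press-fitting levels; the peak-shift indicator above is the core of that comparison.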
Abstract:
Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, switching chemico-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores that open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, such as nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally resolved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and on the numerical methodologies to compute channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on electrodiffusion theory, is computationally inexpensive and was used for an extensive analysis of the molecular determinants of channel conductance. The first recording of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents.
The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally resolved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapters 3 and 4, ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory, ion conduction is modeled by the drift-diffusion equations, describing the ion distributions as continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3) and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects conductance (Chapter 4). As a major result, a correlation between channel conductance and potassium concentration in the intracellular vestibule emerged. The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a subfamily of potassium channels with high single-channel conductance.
The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs is compared. Both experimental measurements and molecular modeling were used to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different roles of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5) and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6, the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions; to conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown and were investigated here by numerical simulation. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted channel and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely under physiological conditions. However, simulating such an extreme condition can help identify the structural conformations, and thus the functional states, accessible to potassium channels.
The last chapter of the thesis deals with the atomic structure of the α-hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity and is also the prototype channel for possible technological applications. The atomic structure of α-hemolysin was revealed by X-ray crystallography, but several lines of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents with numerical simulations. The thesis is organized in two parts: the first provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
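A minimal sketch of the continuum electrodiffusion reasoning used in Chapters 3-4 is the Goldman-Hodgkin-Katz (GHK) flux equation, a constant-field simplification of the full Poisson-Nernst-Planck description. The permeability and concentrations below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)
T = 298.0     # temperature, K

def ghk_current(V, P, z, c_in, c_out):
    """GHK flux equation: ionic current density through a membrane under
    the constant-field approximation of electrodiffusion.
    V in volts, P in m/s, concentrations in mol/m^3 (= mM)."""
    u = z * F * V / (R * T)          # reduced (dimensionless) voltage
    return P * z * F * u * (c_in - c_out * np.exp(-u)) / (1.0 - np.exp(-u))

# Illustrative K+ gradient: 150 mM inside, 5 mM outside.
outward = ghk_current(0.05, P=1e-6, z=1, c_in=150.0, c_out=5.0)   # +50 mV
inward = ghk_current(-0.10, P=1e-6, z=1, c_in=150.0, c_out=5.0)   # -100 mV
print(f"{outward:.3e}, {inward:.3e}")
```

The PNP solver developed in the thesis goes beyond this by resolving the electrostatic potential self-consistently along the pore; the GHK formula is only the simplest closed-form member of the same theory family.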
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the methods developed to analyze the data acquired by the measurement system and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at various points of a network, has emerged as a concern only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context, it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focused on electromagnetic transients affecting line voltages.
The outcome of this study was the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention turns to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method was applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated fault position in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
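The travelling-wave fault-location idea described above, comparing the transient arrival times registered by time-synchronized remote stations, can be sketched with the classical two-ended formula; the line length, propagation speed, and timestamps below are invented for illustration:

```python
def locate_fault(t1_us, t2_us, line_length_km, v_km_per_us):
    """Two-ended travelling-wave fault location: distance of the fault from
    station 1, given the transient arrival times (in microseconds) recorded
    at the two ends of a line of known length and propagation speed.
    Derivation: t1 = d/v and t2 = (L - d)/v, so d = (L + v*(t1 - t2)) / 2."""
    return (line_length_km + v_km_per_us * (t1_us - t2_us)) / 2.0

# Illustrative case: 30 km line, wave speed ~0.29 km/us.
d_km = locate_fault(t1_us=40.0, t2_us=75.0,
                    line_length_km=30.0, v_km_per_us=0.29)
print(f"fault located {d_km:.2f} km from station 1")
```

The accuracy of the estimate is dominated by the timestamping uncertainty of the remote stations, which is why the combined-uncertainty analysis of Chapter 5 propagates the non-idealities of the whole acquisition chain into the fault-position estimate.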
Abstract:
Biological processes are very complex mechanisms, most of them accompanied by, or manifested as, signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors behind the recent advances in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the PhD research activity are examined and compared to those in the literature tackling the same problems; results are drawn from comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is driven not only by designer choices but also by the characteristics of the input signal. Performance comparisons against the state of the art in image quality assessment, contrast gain estimation, and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than algorithms in the literature is possible. Concerning extracellular recordings, the results of the proposed denoising technique, compared to other signal processing algorithms, pointed out an improvement on the state of the art of almost 4 dB.
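As a sketch of the kind of deconvolution-based restoration described above (not the thesis's own adaptive algorithm), frequency-domain Wiener deconvolution recovers a reflectivity sequence blurred by a known point-spread function; all signals and parameters below are synthetic:

```python
import numpy as np

def wiener_deconvolve(y, psf, snr):
    """Frequency-domain Wiener deconvolution: restore a signal blurred by a
    known point-spread function, regularized by an assumed SNR."""
    n = len(y)
    H = np.fft.rfft(psf, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener inverse filter
    return np.fft.irfft(G * Y, n)

# Synthetic example: sparse reflectivity blurred by a Gaussian pulse.
rng = np.random.default_rng(1)
x = np.zeros(256)
x[[40, 140, 150]] = [1.0, 0.7, -0.5]              # point scatterers
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
y = np.convolve(x, psf)[:256] + 0.01 * rng.standard_normal(256)

x_hat = wiener_deconvolve(y, psf, snr=1e3)
```

An adaptive scheme of the kind developed in the thesis would estimate the point-spread function and the regularization level from the data themselves rather than assume them, which is what makes blind restoration of in-vivo acquisitions possible.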
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power, and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor size scales down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway, and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design for Manufacturability (DFM) and Design for Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system;
iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my Ph.D. research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that a careful layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, applying thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design. Later, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor; the results confirmed the ability of self-timed logic to increase manufacturability and reliability.
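As a rough illustration of why thermal management reduces peak temperature and cycle amplitude, the following sketch simulates a first-order thermal RC model of a core under an alternating workload, with and without a simple throttling policy. The model, all constants, and the 70 °C limit are assumptions for illustration, not the thesis library:

```python
import numpy as np

def simulate(power_high, power_low, t_limit=None, steps=2000,
             r_th=1.0, c_th=50.0, t_amb=45.0, dt=1.0):
    # First-order thermal RC model: C * dT/dt = P - (T - T_amb) / R.
    # The workload alternates between a high- and a low-power phase every
    # 200 steps; the optional manager throttles power (e.g. by lowering the
    # supply voltage) whenever the temperature exceeds t_limit.
    temps = []
    T = t_amb
    for k in range(steps):
        p = power_high if (k // 200) % 2 == 0 else power_low
        if t_limit is not None and T > t_limit:
            p = power_low  # throttle
        T += dt * (p - (T - t_amb) / r_th) / c_th
        temps.append(T)
    return np.array(temps)

free = simulate(40.0, 5.0)                 # no thermal management
managed = simulate(40.0, 5.0, t_limit=70)  # throttling at 70 deg C
```

With these numbers the unmanaged core approaches its high-phase steady state near 85 °C, while the managed one oscillates just around the 70 °C limit, so both the peak temperature and the thermal-cycle amplitude shrink.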
Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, and their good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
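A statistical process variation analysis of the kind described can be sketched as a Monte Carlo timing experiment: each die draws a shared systematic threshold-voltage shift plus independent random per-stage shifts, and parametric yield is the fraction of dies meeting a timing constraint. All constants below (sigmas, the alpha-power delay model, the 10% margin) are illustrative assumptions, not values from the thesis tool:

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_delay(n_chips=10000, n_stages=20):
    # Systematic variation: one Vth shift shared by all stages on a die.
    # Random variation: independent Vth shift per stage.
    vth_nom, vdd, alpha = 0.30, 1.0, 1.3  # illustrative constants
    systematic = rng.normal(0.0, 0.015, size=(n_chips, 1))
    random_ = rng.normal(0.0, 0.020, size=(n_chips, n_stages))
    vth = vth_nom + systematic + random_
    # Alpha-power-law gate delay (normalized); path delay = sum over stages.
    stage_delay = 1.0 / (vdd - vth) ** alpha
    return stage_delay.sum(axis=1)

delays = monte_carlo_delay()
t_max = np.quantile(delays, 0.5) * 1.10      # timing constraint: 10% margin
yield_frac = np.mean(delays <= t_max)        # parametric yield estimate
```

Because the systematic component shifts every stage on a die together, it dominates the path-delay spread here, which is one reason compensation knobs like ASV and ABB (applied per die) are effective against it.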
Abstract:
This Ph.D. thesis describes the application of several instrumental analytical techniques suited to the study of foods fundamental to the human diet, such as extra virgin olive oil and dairy products. These products, widely available on the market and of high nutritional value, are increasingly recognized for their healthy properties, although their lipid fraction may contain some components unfavorable to human health. The research activity was structured around the following investigations: "Comparison of different techniques for trans fatty acid analysis"; "Fatty acid analysis of outcrop milk cream samples, with particular emphasis on the content of Conjugated Linoleic Acid (CLA) and trans Fatty Acids (TFA), using a 100 m high-polarity capillary column"; "Evaluation of the oxidized fatty acid (OFA) content during Parmigiano-Reggiano cheese ripening"; "Direct analysis of 4-desmethyl sterols and two dihydroxy triterpenes in saponified vegetable oils (olive oil and others) using liquid chromatography-mass spectrometry"; "Quantitation of long-chain polyunsaturated fatty acids (LC-PUFA) in base infant formulas by gas chromatography, and evaluation of the accuracy of the blending phases during their preparation"; "Fatty acid composition of Parmigiano-Reggiano cheese samples, with emphasis on trans isomers (TFA)".
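By way of illustration, quantitation in gas-chromatographic fatty acid analyses of this kind is typically done against an internal standard. The function below sketches that arithmetic with hypothetical peak areas, masses, and a relative response factor (RRF) of 1; none of these are values from the thesis:

```python
def quantify_fame(area_analyte, area_istd, mass_istd_mg, rrf, sample_mass_g):
    # Internal-standard quantitation commonly used in GC fatty acid analysis:
    # analyte mass = (A_analyte / A_istd) * m_istd / RRF, then normalized
    # to the sample mass (mg of fatty acid per g of sample).
    mass_analyte_mg = (area_analyte / area_istd) * mass_istd_mg / rrf
    return mass_analyte_mg / sample_mass_g

# Hypothetical peak areas and masses, for illustration only.
conc = quantify_fame(area_analyte=5.2e5, area_istd=4.0e5,
                     mass_istd_mg=2.0, rrf=1.0, sample_mass_g=0.5)
# conc = (5.2/4.0) * 2.0 / 0.5 = 5.2 mg of analyte per g of sample
```

The RRF corrects for the detector responding differently to the analyte and the internal standard; it is determined beforehand from a calibration mixture of known composition.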