Abstract:
Purpose. To use a randomized design to evaluate the effectiveness of voice training programs for telemarketers via multidimensional analysis. Methods. Forty-eight telemarketers were randomly assigned to two groups: a voice training group (n = 14), who underwent training over an 8-week period, and a nontraining control group (n = 34). Before and after training, recordings of the sustained vowel /ε/ and connected speech were collected for acoustic and perceptual analyses. Results. Based on pre- and posttraining comparisons, the voice training group presented a significant reduction in percent jitter (P = 0.044). No other significant differences were observed, and inter-rater reliability varied from poor to fair. Conclusions. These findings suggest that voice training improved a single acoustic dimension but did not change the perceptual dimensions of telemarketers' voices.
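The acoustic measure reported above, percent jitter, has a standard definition: the mean absolute difference between consecutive glottal periods, normalized by the mean period. The abstract does not specify the analysis software, so the following is only a minimal sketch of that formula in Python, with made-up period values.

```python
import numpy as np

def percent_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, normalized by the mean period. Standard 'jitter
    (local)' definition used by tools such as Praat."""
    periods = np.asarray(periods, dtype=float)
    diffs = np.abs(np.diff(periods))          # |T_i - T_{i+1}|
    return 100.0 * diffs.mean() / periods.mean()

# Example with hypothetical period estimates (seconds) from a sustained vowel:
T = [0.00805, 0.00810, 0.00798, 0.00812, 0.00801]
print(f"percent jitter = {percent_jitter(T):.3f} %")
```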
Abstract:
Two myotoxic and noncatalytic Lys49-phospholipases A₂ (braziliantoxin-II and MT-II) and a myotoxic and catalytic phospholipase A₂ (braziliantoxin-III) from the venom of the Amazonian snake Bothrops brazili were crystallized. The crystals diffracted to resolutions in the range 2.56–2.05 Å and belonged to space groups P3₁21 (braziliantoxin-II), P6₅22 (braziliantoxin-III) and P2₁ (MT-II). The structures were solved by molecular-replacement techniques. Both of the Lys49-phospholipases A₂ (braziliantoxin-II and MT-II) contained a dimer in the asymmetric unit, while the Asp49-phospholipase A₂ braziliantoxin-III contained a monomer in its asymmetric unit. Analysis of the quaternary assemblies of the braziliantoxin-II and MT-II structures using the PISA program indicated that both models have a dimeric conformation in solution. The same analysis of the braziliantoxin-III structure indicated that this protein does not dimerize in solution and probably acts as a monomer in vivo, similar to other snake-venom Asp49-phospholipases A₂.
Abstract:
Background. Falling in older age is a major public health concern due to its costly and disabling consequences. However, very few randomised controlled trials (RCTs) have been conducted in developing countries, in which population ageing is expected to be particularly substantial in coming years. This article describes the design of an RCT to evaluate the effectiveness of a multifactorial falls prevention program in reducing the rate of falls in community-dwelling older people. Methods/design. Multicentre parallel-group RCT involving 612 community-dwelling men and women aged 60 years and over who have fallen at least once in the previous year. Participants will be recruited in multiple settings in Sao Paulo, Brazil, and will be randomly allocated to a control group or an intervention group. The usual-care control group will undergo a fall risk factor assessment and be referred to their clinicians with the risk assessment report so that individual modifiable risk factors can be managed without any specific guidance. The intervention group will receive a 12-week Multifactorial Falls Prevention Program consisting of: individualised medical management of modifiable risk factors; a group-based, supervised balance training exercise program plus an unsupervised home-based exercise program; and an educational/behavioral intervention. Both groups will receive a leaflet containing general information about fall prevention strategies. Primary outcome measures will be the rate of falls and the proportion of fallers recorded by monthly falls diaries and telephone calls over a 12-month period. Secondary outcome measures will include risk of falling, fall-related self-efficacy score, measures of balance, mobility and strength, fall-related health services use, and independence with daily tasks. Data will be analysed using the intention-to-treat principle. The incidence of falls in the intervention and control groups will be calculated and compared using negative binomial regression analysis. Discussion. This study is the first trial to be conducted in Brazil to evaluate the effectiveness of an intervention to prevent falls. If the intervention is proven to reduce falls, this study has the potential to benefit older adults and assist health care practitioners and policy makers in implementing and promoting effective falls prevention interventions. Trial registration: ClinicalTrials.gov (NCT01698580).
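The planned negative binomial comparison of fall rates can be sketched in a few lines. The following is not the trial's actual analysis script; it is a minimal illustration using the statsmodels library on fabricated placeholder data, with follow-up time entered as an exposure so that exp(coefficient) is an incidence rate ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical monthly-diary data: fall counts over 12 months per participant.
rng = np.random.default_rng(0)
n = 612
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),             # 0 = control, 1 = intervention
    "months": np.full(n, 12.0),                 # follow-up time (exposure)
})
df["falls"] = rng.negative_binomial(1, 0.5, n)  # placeholder outcome

# Negative binomial regression of fall counts on group, with follow-up
# time as exposure; exp(coef) is the incidence rate ratio.
model = smf.glm("falls ~ group", data=df,
                family=sm.families.NegativeBinomial(),
                exposure=df["months"]).fit()
print(model.summary())
print("IRR (intervention vs control):", np.exp(model.params["group"]))
```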
Abstract:
OBJECTIVE: To analyze the costs of human immunodeficiency virus (HIV) outpatient treatment for individuals with different CD4 cell counts in the Brazilian public health system, and to compare them with costs in other national health systems. METHODS: A retrospective survey was conducted in five public outpatient clinics of the Brazilian national HIV program in the city of São Paulo. Data on the healthcare services provided during one year of HIV outpatient treatment were gathered from randomly selected medical records. Prices of the inputs used were obtained through market research and public sector databases. Information on the costs of HIV outpatient treatment in other national health systems was gathered from the literature. Annual costs of HIV outpatient treatment in each country were converted into 2010 U.S. dollars. RESULTS: The annual cost of HIV outpatient treatment in the Brazilian national public program in São Paulo in 2006 was US$ 2,572.92, ranging from US$ 1,726.19 for patients with CD4 cell counts above 500 to US$ 3,693.28 for patients with CD4 cell counts between 51 and 200. Antiretrovirals (ARVs) represented approximately 62.0% of annual HIV outpatient costs. Comparing different health systems over the same period, HIV outpatient treatment presented higher costs in countries where HIV treatment is provided by the private sector. CONCLUSION: The main cost drivers of HIV outpatient treatment across health systems were ARVs, other medications, health professional services, and diagnostic exams. Nevertheless, the magnitude of these cost drivers varied among HIV outpatient treatment programs due to differences in health system efficiency. The data presented may be a valuable tool for public policy evaluation of HIV treatment programs worldwide.
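The standardization of costs into 2010 U.S. dollars mentioned in the methods typically involves two steps: conversion at the study-year exchange rate and inflation adjustment with a price index. The sketch below illustrates this arithmetic only; the exchange rate and CPI values are placeholders, not the figures used in the study.

```python
# Sketch of the cost-standardization step: convert an annual treatment cost
# in local currency to 2010 U.S. dollars. The exchange rate and inflation
# factors below are placeholders, not the values used in the study.

def to_2010_usd(cost_local, usd_rate_study_year, cpi_study_year, cpi_2010):
    """Convert a local-currency cost to USD at the study-year rate,
    then inflate to 2010 dollars with the U.S. CPI."""
    cost_usd = cost_local / usd_rate_study_year   # local currency per USD
    return cost_usd * (cpi_2010 / cpi_study_year)

# Hypothetical example: a BRL cost from 2006 expressed in 2010 USD.
print(to_2010_usd(cost_local=5600.0, usd_rate_study_year=2.18,
                  cpi_study_year=201.6, cpi_2010=218.1))
```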
Abstract:
AIM: To explore the biomechanical effects of different implantation bone levels of Morse taper implants, employing finite element analysis (FEA). METHODS: Dental implants (Titamax CM) measuring 4 × 13 mm and 4 × 11 mm, with their respective abutments of 3.5 mm height, simulating a screwed premolar metal-ceramic crown, were modelled using the software Ansys Workbench 10.0. They were positioned in bone blocks covered by a 2.5 mm thick mucosa layer. The cortical bone was modelled with 1.5 mm thickness, and trabecular bone completed the bone block. Four groups were formed: group 11CBL (11 mm implant length at cortical bone level), group 11TBL (11 mm implant length at trabecular bone level), group 13CBL (13 mm implant length at cortical bone level) and group 13TBL (13 mm implant length at trabecular bone level). Oblique 200 N loads were applied. Von Mises equivalent stresses in cortical and trabecular bone were evaluated with the same program. RESULTS: The results were presented qualitatively and quantitatively on standard scales for each type of bone. They suggest that positioning the implant entirely in trabecular bone is detrimental with respect to the generated stresses. Implantation in cortical bone offers advantages in anchorage and locking, reflected in better dissipation of stresses along the implant/bone interfaces. In addition, anchoring the apical region of the implant in cortical bone is of great value in improving stabilization and, consequently, stress distribution. CONCLUSIONS: Positioning the implant slightly below the bone crest offers advantages, such as better long-term predictability with respect to the expected crestal bone loss.
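The von Mises equivalent stress used to evaluate the bone blocks has a closed-form expression in the six Cauchy stress components. The snippet below is a generic illustration of that formula, not output from the Ansys models; the stress values are hypothetical.

```python
import numpy as np

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components."""
    return np.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2
                          + 6.0 * (txy**2 + tyz**2 + tzx**2)))

# Hypothetical stress state (MPa) at a point of the cortical bone mesh:
print(f"sigma_vm = {von_mises(12.0, -3.5, 1.2, 4.0, 0.8, 2.1):.2f} MPa")
```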
Abstract:
INTRODUCTION: With the aim of identifying genetic factors associated with the response to an immune treatment based on autologous monocyte-derived dendritic cells pulsed with autologous inactivated HIV, we performed exome analysis, screening more than 240,000 putative functional exonic variants in 18 HIV-positive Brazilian patients who underwent the immune treatment. METHODS: Exome analysis was performed using the Illumina Infinium HumanExome BeadChip. The zCall algorithm allowed us to recall rare variants. Quality control and SNP-centred analysis were done with the GenABEL R package. An in-house implementation of the Wang method permitted gene-centred analysis. RESULTS: The CCR4-NOT transcription complex, subunit 1 (CNOT1) gene (16q21) showed the strongest association with modification of the response to the therapeutic vaccine (p=0.00075). The CNOT1 SNP rs7188697 A/G was significantly associated with DC treatment response. The presence of a G allele indicated poor response to the therapeutic vaccine (p=0.0031; OR=33.00; CI=1.74-624.66), and the SNP behaved in a dominant model (A/A vs. A/G+G/G, p=0.0009; OR=107.66; 95% CI=3.85-3013.31), with the A/G genotype present only in weak/transient responders, conferring susceptibility to poor response to the immune treatment. DISCUSSION: CNOT1 is known to be involved in the control of mRNA deadenylation and mRNA decay. Moreover, CNOT1 has recently been described as being involved in the regulation of inflammatory processes mediated by tristetraprolin (TTP). The TTP-CCR4-NOT complex (CNOT1 in the CCR4-NOT complex is the binding site for TTP) has been reported to interfere with HIV replication through post-transcriptional control. We can therefore hypothesize that genetic variation occurring in the CNOT1 gene could impair the TTP-CCR4-NOT complex, thus interfering with HIV replication and/or the host immune response. CONCLUSIONS: Being aware that our findings are restricted to the 18 patients studied and need replication, and that the genetic variant of the CNOT1 gene, localized in intron 3, has no known functional effect, we propose a novel potential candidate locus for modulation of the response to the immune treatment, and open a discussion on the need to consider the host genome as another potential variable to be evaluated when designing an immune therapy study.
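The dominant-model association reported above (A/A vs. A/G+G/G) is the kind of result obtainable from a 2x2 contingency table. As a hedged illustration, the sketch below computes an odds ratio and Fisher exact p-value with scipy on invented genotype counts; the counts do not reproduce the study's figures.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for the dominant model (G carriers vs A/A):
#                   carrier(A/G+G/G)  non-carrier(A/A)
# weak/transient          7                 1
# good responder          1                 9
table = [[7, 1],
         [1, 9]]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```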
Abstract:
The purpose of this work is twofold: to define and calculate a factor of collapse for the traditional method of designing sheet pile walls, and to identify the parameters that most influence a finite element model representative of this problem. The text is structured as follows: chapters 1 to 5 analyse a series of topics useful for understanding the problem, while the considerations most directly related to the purpose of the work are reported in chapters 6 to 10. The first part of the document covers: what a sheet pile wall is, which codes govern the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, answering the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in sandy soil with those provided by a finite element model of the same problem, we concluded that: in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behaviour of the structure-soil system; other parameters, such as the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analysed two different structures: the first supports an excavation of 4 m, the second an excavation of 7 m. Both structures were first designed using the traditional method, then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangents of the initial friction angle and of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a range of parameters, including the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found the value of the factor of collapse to lie between 1.25 and 2. With some considerations, reported in the text, the values found can be compared with the safety factor proposed by the code (linked to the friction angle of the soil).
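The definition of the factor of collapse given above translates directly into a one-line formula, F = tan(phi_initial)/tan(phi_collapse). A minimal sketch, with hypothetical friction angles chosen only to land in the 1.25-2 range mentioned:

```python
import math

def collapse_factor(phi_initial_deg, phi_collapse_deg):
    """Factor of collapse as defined in the text: ratio between the
    tangents of the initial friction angle and of the friction angle
    at which the FE model reaches collapse."""
    return (math.tan(math.radians(phi_initial_deg))
            / math.tan(math.radians(phi_collapse_deg)))

# Hypothetical values: soil with phi = 35 deg, collapse reached at 24 deg.
print(f"F = {collapse_factor(35.0, 24.0):.2f}")
```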
Abstract:
The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of less than 1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work was the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied the extent to which the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are the basis for designing new strategies for tackling problems such as the prediction of protein structure and function.
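The two network descriptors mentioned above, characteristic path length and clustering coefficient, can be computed on a contact network built from C-alpha distances. The sketch below, using networkx and toy coordinates in place of a real structure, shows one common construction (the 8 Å cutoff is an assumption, not necessarily the thesis's choice).

```python
import numpy as np
import networkx as nx

def contact_network(ca_coords, cutoff=8.0):
    """Build a residue contact network: nodes are residues, edges join
    C-alpha pairs closer than `cutoff` angstroms (|i-j| > 1 to skip
    trivial backbone neighbours)."""
    n = len(ca_coords)
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 2, n):
            if d[i, j] < cutoff:
                G.add_edge(i, j)
    return G

# Toy random-walk coordinates standing in for real C-alpha positions:
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(0, 2.0, size=(60, 3)), axis=0)
G = contact_network(coords)
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
print("clustering coefficient:", nx.average_clustering(G))
```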
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, ensuring cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in present databases of molecular functions and structures.
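The cluster-building step described above, stringent constraints on identity and alignment coverage followed by grouping, can be illustrated with a small single-linkage sketch. The hit format, thresholds and union-find grouping below are illustrative assumptions, not the thesis's exact pipeline.

```python
# Sketch of the cluster-building step: keep BLAST hits that satisfy both
# stringent constraints (sequence identity and alignment coverage), then
# group sequences by single linkage over the surviving pairs.

def build_clusters(hits, min_identity=40.0, min_coverage=0.9):
    """hits: iterable of (query, subject, identity_pct, coverage_fraction)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for q, s, ident, cov in hits:
        if ident >= min_identity and cov >= min_coverage:
            union(q, s)

    clusters = {}
    for seq in parent:
        clusters.setdefault(find(seq), set()).add(seq)
    return list(clusters.values())

hits = [("p1", "p2", 62.0, 0.95), ("p2", "p3", 55.0, 0.97),
        ("p4", "p5", 30.0, 0.99)]     # last pair fails the identity cut
print(build_clusters(hits))           # [{'p1', 'p2', 'p3'}]
```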
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches, Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's Selection Model (Heckman, 1979), being widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on the Rubin Potential Outcome Approach, matching methods, and briefly on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to correctly test balance. The third part contains the original contribution proposed, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline our future perspectives.
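The inertia-based measure of dependence between X and T described above is closely related to the total inertia of a contingency table, chi-square divided by n. The sketch below uses that quantity as a rough stand-in on simulated data; it is not the thesis's implementation.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical pre-treatment categorical covariate X and a treatment
# indicator T. The total inertia of the X-by-T table (chi-square over n)
# serves here as a rough stand-in for the dependence-based bias measure.
rng = np.random.default_rng(2)
n = 500
T = rng.integers(0, 2, n)
# Covariate mildly dependent on T, to mimic self-selection:
X = np.where(rng.random(n) < 0.3 + 0.2 * T, "high", "low")

table = pd.crosstab(X, T)
chi2, p, dof, _ = chi2_contingency(table)
print(f"inertia = {chi2 / n:.4f}, chi-square p-value = {p:.4g}")
```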
Abstract:
This work presents a program for simulations of vehicle-track and vehicle-track-structure dynamic interaction. The method used is computationally efficient, in the sense that a reduced number of coordinates is sufficient and high-performance computers are not required. The method adopts a modal substructuring approach: rails, sleepers and the underlying structure are modelled with modal coordinates, the vehicle with physical lumped-element coordinates, and interconnection elements between these structures (wheel-rail contact, railpads and ballast) are introduced by means of their interaction forces. The frequency response function (FRF) is also calculated, both for a track over a structure (a bridge, a viaduct, ...) and for the simple vehicle-track case; for each case, the effect of the vehicle on the FRF is then analysed by comparing the FRFs obtained with and without a simplified vehicle on the system.
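To make the FRF concept concrete, the sketch below computes the receptance of a single degree-of-freedom stand-in for the track, H(w) = 1/(k - m*w^2 + i*c*w). The modal substructured model of the thesis is far richer; the parameter values here are placeholders.

```python
import numpy as np

# Receptance FRF H(w) = x/F for a single-DOF stand-in of the track
# (mass m, stiffness k, viscous damping c). This only illustrates what
# an FRF computation returns, not the thesis's modal model.
m, k, c = 150.0, 5.0e7, 2.0e4            # placeholder rail-scale values
freqs = np.linspace(1.0, 2000.0, 2000)   # Hz
omega = 2.0 * np.pi * freqs
H = 1.0 / (k - m * omega**2 + 1j * c * omega)

f_peak = freqs[np.argmax(np.abs(H))]
print(f"resonance near {f_peak:.0f} Hz "
      f"(undamped estimate {np.sqrt(k/m)/(2*np.pi):.0f} Hz)")
```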
Abstract:
The increasing use of fiber-reinforced methods for strengthening existing brick masonry walls and columns, especially for the rehabilitation of historical buildings, has generated considerable research interest in understanding the failure mechanisms of such systems. This dissertation aims to provide a basic understanding of the behavior of solid brick masonry walls, unwrapped and wrapped with Fiber Reinforced Cementitious Matrix composites. This is a new type of composite material, commonly known as FRCM, featuring a cementitious inorganic matrix (binder) instead of the more common epoxy one. The influence of the FRCM reinforcement on the load-carrying capacity and on the strain distribution during compression tests was investigated using a full-field optical technique known as Digital Image Correlation. Compression tests were carried out on 6 clay brick columns and on 7 clay brick walls in three different configurations, built using bricks scaled 1:2 with respect to the first set, in order to determine the effects of the FRCM reinforcement. The goal of the experimental program is to understand how the behavior of brick masonry is improved by the FRCM wrapping. The results indicate that there is an arching action zone in the form of a parabola whose shape varies according to the configuration used. The area under the parabolas is considered ineffectively confined, while the effectively confined area is assumed to lie within the region where the arching action has fully developed.
Abstract:
The general aim of this work is to contribute to the energy performance assessment of ventilated façades through the simultaneous use of experimental data and numerical simulations. A significant amount of experimental work was done on different types of naturally ventilated façades. The measurements were taken on a test building, a tower whose external walls are rainscreen ventilated façades, with ventilation grills located at the top and at the bottom. In this work, the modelling of the test building using a dynamic thermal simulation program (ESP-r) is presented and the main results are discussed. In order to identify the best summer thermal performance of a rainscreen ventilated skin façade, different setups of rainscreen walls were studied. In particular, the influence of ventilation grills, air cavity thickness, skin colour, skin material and façade orientation was investigated. It is shown that some rainscreen ventilated façade typologies are capable of lowering the cooling energy demand by a few percentage points.
Abstract:
The systematic exploration of excited meson and baryon states was the central topic of the COMPASS physics program in the years 2008 and 2009 at the CERN facility. A hadron beam of 190 GeV/c particle momentum impinged on a 40 cm long liquid hydrogen target to create excited states of the beam particles through diffractive processes. The presented work studies the process $K^- p \rightarrow K^- \pi^+ \pi^- p_{recoil}$, with special emphasis on how kaons were distinguished from pions, using the CEDAR detectors in the initial channel as well as the RICH detector in the final states. In the end, 270 000 events formed an invariant $K\pi\pi$ mass distribution of overlapping resonances. In addition, a detailed MC simulation study of 44 million decays in the range $0.8 < m(K\pi\pi) < 3.0$ GeV/c² was performed and analysed for acceptance corrections. All information was combined into a mass-independent partial wave analysis to observe resonances of individual particles. The main contributions were found in the $J^P = 0^+, 1^+, 2^-$ and $2^+$ spin-parity states.
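The invariant Kππ mass at the heart of the analysis is computed from the measured three-momenta and the known particle masses. A minimal sketch with hypothetical lab-frame momenta (not COMPASS data):

```python
import numpy as np

# Particle masses in GeV/c^2 (PDG values, rounded)
M_K, M_PI = 0.49368, 0.13957

def invariant_mass(momenta, masses):
    """Invariant mass of a set of particles from their 3-momenta (GeV/c)."""
    momenta = np.asarray(momenta, dtype=float)
    E = np.sqrt((momenta**2).sum(axis=1) + np.asarray(masses)**2)
    p_tot = momenta.sum(axis=0)
    return np.sqrt(E.sum()**2 - (p_tot**2).sum())

# Hypothetical lab-frame 3-momenta for K- pi+ pi- (GeV/c):
p3 = [[1.2, 0.1, 60.0], [-0.4, 0.2, 70.0], [0.3, -0.3, 50.0]]
print(f"m(K pi pi) = {invariant_mass(p3, [M_K, M_PI, M_PI]):.3f} GeV/c^2")
```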
Abstract:
The aim of this work was to identify markers associated with production traits in the pig genome using different approaches. We focused on the Italian Large White pig breed, using Genome Wide Association Studies (GWAS) and applying a selective genotyping approach to increase the power of the analyses. Furthermore, we searched the pig genome using Next Generation Sequencing (NGS) Ion Torrent technology, combining the selective genotyping approach with deep sequencing for SNP discovery. Two other studies were carried out with a different approach: allele frequency changes for SNPs affecting candidate genes, and at the genome-wide level, were analysed to identify selection signatures driven by the selection program over the last 20 years. This approach confirmed that a great number of markers may affect production traits and that they are captured by classical selection programs. GWAS revealed 123 significant or suggestively significant SNPs associated with Back Fat Thickness and 229 associated with Average Daily Gain. 16 copy number variant regions were more frequent in lean or fat pigs, showing that different copy numbers of those regions could have a limited impact on fatness. These regions often appear to be involved in food intake and behavior, besides affecting genes involved in metabolic pathways and their expression. By combining NGS with the selective genotyping approach, new variants were discovered, and at least 54 are worth analysing in association studies. The study of groups of pigs that underwent stringent selection showed that the allele frequencies of some loci can change drastically if they are linked to traits of interest for selection schemes. These approaches could, in the future, be integrated into genomic selection plans.
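The allele-frequency-change scan described above reduces, per SNP, to comparing allele counts between two groups of animals. A hedged sketch on invented genotypes (coded 0/1/2 copies of the alternative allele); it is not the study's pipeline.

```python
import numpy as np
from scipy.stats import chi2_contingency

def allele_counts(genotypes):
    """Genotypes coded 0/1/2 = copies of the alternative allele."""
    g = np.asarray(genotypes)
    alt = g.sum()
    return np.array([alt, 2 * len(g) - alt])   # [alt, ref] allele counts

# Hypothetical SNP genotypes in two groups (e.g., pigs selected 20 years
# apart, or the lean vs fat tails used for selective genotyping):
group_a = [0, 1, 1, 2, 0, 1, 0, 0, 1, 2]
group_b = [1, 2, 2, 2, 1, 2, 1, 2, 2, 1]

table = np.vstack([allele_counts(group_a), allele_counts(group_b)])
chi2, p, _, _ = chi2_contingency(table)
freq = table[:, 0] / table.sum(axis=1)
print(f"alt allele freq: {freq[0]:.2f} -> {freq[1]:.2f}, p = {p:.4f}")
```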
Abstract:
The uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modelling. This study deals with a new approach in geotechnical modelling that relies on the stochastic generation of different soil layer distributions following a Boolean logic; the method has thus been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. In building a geotechnical model, it is common to discard some stratigraphic data in order to simplify the model itself, assuming that the significance of the results of the modelling procedure will not be affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, the technique can be used to determine the most significant zones, where eventual further investigations and surveys would be most effective for building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a specifically coded MATLAB program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations over the different data files in order to maximize the sample size. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D and an actual landslide, namely the Mortisa mudslide (Cortina d'Ampezzo, BL, Italy). However, it could be extended to numerous different cases, especially for hydrogeological analysis and landslide stability assessment, in different geological and geomorphological contexts.
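A minimal sketch of the BoSG idea, not the authors' MATLAB code: rectangular lenses of a secondary material are dropped at random into a uniform matrix on a grid, and each realization is written to a plain-text file such as a FLAC pre-processor could consume. Grid size, lens shape and file naming are assumptions.

```python
import numpy as np

def generate_realization(nx, ny, n_lenses, lens_size, rng):
    """One Boolean realization: rectangular lenses of a secondary material
    (coded 1) randomly placed in a uniform matrix (coded 0)."""
    grid = np.zeros((ny, nx), dtype=int)
    for _ in range(n_lenses):
        i = rng.integers(0, ny - lens_size[0])
        j = rng.integers(0, nx - lens_size[1])
        grid[i:i + lens_size[0], j:j + lens_size[1]] = 1
    return grid

rng = np.random.default_rng(42)
for k in range(3):                      # a few of the many realizations
    grid = generate_realization(80, 30, n_lenses=6, lens_size=(3, 12), rng=rng)
    # One plain-text file per soil configuration, to be parsed into FLAC zones:
    np.savetxt(f"bosg_{k:03d}.txt", grid, fmt="%d")
```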