859 results for WORK METHODS


Relevance: 30.00%

Abstract:

Abstract Background Direct smear examination with Ziehl-Neelsen (ZN) staining for the diagnosis of pulmonary tuberculosis (PTB) is cheap and easy to use, but its low sensitivity is a major drawback, particularly in HIV-seropositive patients. As such, new tools for laboratory diagnosis are urgently needed to improve the case detection rate, especially in regions with a high prevalence of TB and HIV. Objective To evaluate the performance of two in-house PCR (polymerase chain reaction) assays, PCR dot-blot (PCR dot-blot) and PCR agarose gel electrophoresis (PCR-AG), for the diagnosis of pulmonary tuberculosis (PTB) in HIV-seropositive and HIV-seronegative patients. Methods A prospective study was conducted (from May 2003 to May 2004) in a TB/HIV reference hospital. Sputum specimens from 277 PTB suspects were tested by acid-fast bacilli (AFB) smear, culture and the in-house PCR assays (PCR dot-blot and PCR-AG), and their performances were evaluated. Positive cultures combined with the clinical definition of pulmonary TB were employed as the gold standard. Results The overall prevalence of PTB was 46% (128/277); in HIV+ patients, prevalence was 54.0% (40/74). The sensitivity and specificity of PCR dot-blot were 74% (95% CI 66.1%-81.2%) and 85% (95% CI 78.8%-90.3%), and those of PCR-AG were 43% (95% CI 34.5%-51.6%) and 76% (95% CI 69.2%-82.8%), respectively. For HIV-seropositive and HIV-seronegative samples, the sensitivities of PCR dot-blot (72% vs 75%; p = 0.46) and PCR-AG (42% vs 43%; p = 0.54) were similar. Among HIV-seronegative patients and PTB suspects, ROC analysis yielded the following areas for the AFB smear (0.837), culture (0.926), PCR dot-blot (0.801) and PCR-AG (0.599). In HIV-seropositive patients, these area values were 0.713, 0.900, 0.789 and 0.595, respectively. Conclusion The results of this study demonstrate that the in-house PCR dot-blot may be an improvement for ruling out PTB diagnosis in PTB suspects assisted at hospitals with a high prevalence of TB/HIV.
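The sensitivity and specificity figures with 95% confidence intervals quoted above follow from comparing each assay against the gold standard in a 2x2 table. The sketch below shows one way such estimates could be computed with a Wilson score interval; the counts are approximate values back-calculated from the reported prevalence and PCR dot-blot performance, used only for illustration and not the study's raw data.

```python
# Sketch: sensitivity/specificity with 95% CIs from a 2x2 table versus the gold standard.
# Counts are approximate, back-calculated illustrations, not the study's raw table.
from statsmodels.stats.proportion import proportion_confint

def sens_spec(tp, fn, tn, fp, alpha=0.05):
    """Return (sensitivity, CI) and (specificity, CI) using Wilson score intervals."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=alpha, method="wilson")
    spec_ci = proportion_confint(tn, tn + fp, alpha=alpha, method="wilson")
    return (sens, sens_ci), (spec, spec_ci)

if __name__ == "__main__":
    # Approximate counts for PCR dot-blot: ~95 true positives, ~33 false negatives,
    # ~127 true negatives, ~22 false positives.
    (se, se_ci), (sp, sp_ci) = sens_spec(tp=95, fn=33, tn=127, fp=22)
    print(f"Sensitivity = {se:.1%} (95% CI {se_ci[0]:.1%}-{se_ci[1]:.1%})")
    print(f"Specificity = {sp:.1%} (95% CI {sp_ci[0]:.1%}-{sp_ci[1]:.1%})")
```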

Relevance: 30.00%

Abstract:

Abstract Background Food handlers have a very important role in preventing food contamination during its preparation and distribution. This responsibility is even greater in hospitals, since a large number of patients have low immunity and, consequently, food contamination by pathogenic bacteria could be particularly harmful. Therefore, a good working environment and periodic training should be provided to food handlers by upper management. Methods This is a qualitative study using focus group and thematic content analysis methodologies to examine, in detail, the statements of food handlers working in the milk and specific-diet kitchens of a hospital, in order to understand the problems they face in the workplace. Results We found that food handlers are aware of the role they play in restoring patients' health; they consider it important to offer a good-quality diet. However, according to their perceptions, a number of difficulties prevent them from reaching this aim. These include: upper management not prioritizing human and material resources for the dietetic services when making resource allocation decisions; a perception that upper management considers their work to be of lesser importance; delayed overtime payments; lack of periodic training; managers lacking administrative skills; insufficient dietitian staff assistants, leading to overwork, even as there is an excess of dietitians; unhealthy environmental working conditions (high temperature, high humidity, loud and constant noise, poor ventilation); lack of food, kitchen utensils and equipment; and relationship conflicts with chief dietitians and co-workers. Conclusion From these findings, improvement in staff motivation could be achieved by considering non-financial incentives, such as better working conditions and showing appreciation and respect through supervision, training and performance appraisal. Management action, such as investment in intermediary management so that managers have the capacity to provide supportive supervision, as well as better use of performance appraisal and access to training, may help overcome the identified problems.

Relevance: 30.00%

Abstract:

Abstract Background An estimated 10–20 million individuals are infected with the retrovirus human T-cell leukemia virus type 1 (HTLV-1). While the majority of these individuals remain asymptomatic, 0.3-4% develop a neurodegenerative inflammatory disease, termed HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). HAM/TSP results in the progressive demyelination of the central nervous system and is a differential diagnosis of multiple sclerosis (MS). The etiology of HAM/TSP is unclear, but evidence points to a role for CNS-infiltrating T-cells in pathogenesis. Recently, the HTLV-1 Tax protein has been shown to induce transcription of the human endogenous retrovirus (HERV) families W, H and K. Intriguingly, numerous studies have implicated these same HERV families in MS, though this association remains controversial. Results Here, we explore the hypothesis that HTLV-1 infection results in the induction of HERV antigen expression and the elicitation of HERV-specific T-cell responses which, in turn, may be reactive against neurons and other tissues. PBMC from 15 HTLV-1-infected subjects, 5 of whom presented with HAM/TSP, were comprehensively screened for T-cell responses to overlapping peptides spanning HERV-K(HML-2) Gag and Env. In addition, we screened for responses to peptides derived from diverse HERV families, selected on the basis of predicted optimal epitope binding. We observed a lack of responses to each of these peptide sets. Conclusions Thus, although the limited scope of our screening prevents us from conclusively disproving our hypothesis, the current study does not provide data supporting a role for HERV-specific T-cell responses in HTLV-1-associated immunopathology.

Relevance: 30.00%

Abstract:

CONTEXT: Orthotopic liver transplantation is an excellent treatment approach for hepatocellular carcinoma in well-selected candidates. Nowadays, some institutions tend to expand the Milan criteria, to include tumors larger than 5 cm and also multiple tumors none larger than 3 cm, in order to benefit more patients with orthotopic liver transplantation. METHODS: The data collected were based on the online database PubMed. The keywords applied in the search were "expanded Milan criteria", limited to the period from 2000 to 2009. We excluded 19 papers due to irrelevance of the subject, lack of information and incompatibility of language (English only). We compiled 1- to 5-year patient survival and tumor recurrence-free rates in patients with hepatocellular carcinoma submitted to orthotopic liver transplantation according to the expanded Milan criteria from different centers. RESULTS: The review compiled data from 23 articles. Fourteen different criteria were found and are described in detail; the University of California, San Francisco criteria were the most studied among them. CONCLUSION: The expanded Milan criteria are a useful attempt to widen the preexisting protocol for patients with hepatocellular carcinoma on the waiting list for orthotopic liver transplantation. However, there is no significant difference in patient survival or tumor recurrence-free rates compared with patients who met the Milan criteria.

Relevance: 30.00%

Abstract:

Introduction Approximately 20% of JIA patients enter adulthood with clinically active disease and disability; therefore, their work status may be affected. Objectives To assess the prevalence of work disability among adult patients with JIA regularly attending a tertiary rheumatology center and to determine possible associated risk factors. Methods This was a cross-sectional study that enrolled 43 JIA patients according to the 2004 revised ILAR criteria. A questionnaire was developed to evaluate working status and labor activity: occupation, current/previous work, employment status and withdrawal rate were actively searched. Demographic data, JIA characteristics, clinical activity (DAS28 > 2.6), therapeutic intervention, comorbidities, physical activity, sedentarism (WHO definitions), functional class (1991 ACR criteria), HAQ and SF-36 were recorded. The prevalence of work disability was calculated with a 95% confidence interval and compared across all parameters; qualitative variables were analyzed using tests of association (chi-square test) and quantitative variables by Mann-Whitney or Student's t test. Results Patients' mean age was 29 ± 7.4 yrs (range 19-41), with mean JIA duration of 17.2 ± 12.3 yrs (range 3-33); 63% were males and 37% females. JIA subtypes were 64% polyarticular, 11% oligoarticular, 9% systemic, 9% ERA, 2% extended oligoarticular and 2% psoriatic arthritis; 7% had uveitis. Serum RF was positive in 21% and ANA in 21%. The majority (72%, n = 31) of JIA patients were employed, whereas 28% (n = 12) were currently not working. In the latter group, 83% (10/12) were retired due to JIA-related disability. Further analysis comparing those currently working vs. those not working revealed similar ages (25.3 yrs vs. 29.5 yrs, p = 0.09). Although not significantly, most patients currently working had polyarticular-onset JIA (22 vs. 6, p = 0.37), higher frequencies of a good education level of >12 yrs of school (31 vs. 9, p = 0.38), functional class I (p = 0.96), practiced regular physical activity (9 vs. 0, p = 0.89) and were single (26 vs. 8, p = 0.15). Both groups had comparable HAQ and DAS28 scores (0.62 vs. 0.59, p = 0.47 and 2.51 vs. 2.07, p = 0.64) and similar arthroplasty rates (8 vs. 4, p = 0.427). Frequencies of hypertension (3 vs. 1, p = 0.999), dyslipidemia (1 vs. 1, p = 0.125), diabetes (1 vs. 0, p = 0.999), depression (1 vs. 0, p = 0.999) and smoking (3 vs. 1, p = 0.99) were alike in both groups. Remarkably, employed patients had a higher SF-36 mental health component (84.0 vs. 70.42, p = 0.01). Conclusion The high prevalence of work disability (almost one third) and of retirement due to disease-related incapacity remains a major problem for adult JIA individuals. We also identified worse mental health in employed patients, indicating that further research is needed, in addition to intense affirmative disability actions, in order to remove possible disabling barriers and to adapt restrictive environments for these patients. Moreover, enhanced strategies and policies for the inclusion of JIA patients in the job market are urgently needed.
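The group comparisons described in the Methods rest on standard tests of association and rank tests. A minimal sketch of how such comparisons could be run with SciPy is shown below; all counts and scores are hypothetical placeholders, not the study's data.

```python
# Sketch: the kinds of comparisons described above (chi-square for categorical variables,
# Mann-Whitney U for quantitative variables). All numbers are hypothetical.
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 2x2 table: regular physical activity (yes/no) by work status (working/not working).
table = [[9, 0],
         [22, 12]]
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# Hypothetical HAQ scores for the two groups.
haq_working = [0.0, 0.25, 0.5, 0.75, 1.0, 0.5]
haq_not_working = [0.25, 0.5, 1.0, 1.25, 0.75]
u_stat, p_mw = mannwhitneyu(haq_working, haq_not_working, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mw:.3f}")
```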

Relevance: 30.00%

Abstract:

Introduction Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers, which were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false negatives), nested-PCR detected 58 cases (1 false negative and 4 false positives), multiplex-nested-PCR detected 57 cases (2 false negatives), and real-time PCR detected 58 cases (1 false negative). Conclusions The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
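The seven parameters listed above all derive from the 2x2 table of each assay against the case definition. The sketch below computes them from counts reconstructed from the real-time PCR figures reported above (58 of 59 cases detected, 1 false negative, no false positives among the 41 non-cases); the implementation itself is an illustrative reconstruction, not the study's analysis code.

```python
# Sketch: the seven diagnostic parameters from a 2x2 table against the case definition.
import math

def diagnostic_panel(tp, fp, fn, tn):
    se = tp / (tp + fn)                              # sensitivity
    sp = tn / (tn + fp)                              # specificity
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    plr = se / (1 - sp) if sp < 1 else math.inf      # positive likelihood ratio (infinite when Sp = 100%)
    nlr = (1 - se) / sp if sp > 0 else math.inf      # negative likelihood ratio
    ef = (tp + tn) / (tp + fp + fn + tn)             # efficiency (overall accuracy)
    return {"Se": se, "Sp": sp, "PPV": ppv, "NPV": npv, "PLR": plr, "NLR": nlr, "Ef": ef}

if __name__ == "__main__":
    # Counts reconstructed from the real-time PCR results reported above.
    print(diagnostic_panel(tp=58, fp=0, fn=1, tn=41))
```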

Relevance: 30.00%

Abstract:

Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
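For reference, the model problem mentioned above is the Stokes system. The sketch below states a generic strong form and the standard Galerkin weak formulation; the notation is generic and not necessarily that of the paper, and the stabilizing balance-of-force term of the Petrov–Galerkin variant is omitted.

```latex
% Stokes model problem (generic notation; not necessarily the paper's exact formulation):
\[
  -\nu\,\Delta\mathbf{u} + \nabla p = \mathbf{f},
  \qquad \nabla\cdot\mathbf{u} = 0 \quad \text{in } \Omega,
  \qquad \mathbf{u} = \mathbf{0} \quad \text{on } \partial\Omega .
\]
% Standard Galerkin weak form: find (u_h, p_h) in V_h x Q_h such that
\[
  \nu\,(\nabla\mathbf{u}_h,\nabla\mathbf{v}_h)
  - (p_h,\nabla\cdot\mathbf{v}_h)
  + (q_h,\nabla\cdot\mathbf{u}_h)
  = (\mathbf{f},\mathbf{v}_h)
  \qquad \forall\,(\mathbf{v}_h,q_h)\in V_h\times Q_h .
\]
```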

Relevance: 30.00%

Abstract:

[EN] We analyze the discontinuity-preserving problem in TV-L1 optical flow methods. This type of method typically creates rounding effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions for mitigating the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, it is possible to arrive at different schemes to solve this problem. One of these consists in separating the pure TV process from the mitigating strategy. This has been used in another work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) we study a fully automatic approach that solves the problem based on the information of the whole image; (ii) we derive a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison between the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can be easily achieved by using strong regularizations and high penalizations at image contours.
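As a point of reference, energies of this family couple an L1 data term with a TV regularizer whose diffusion is inhibited at image contours by a decreasing function of the image gradient. The sketch below is a generic form of such an energy; the choice of g and of the parameters is illustrative and not necessarily the formulation used in the paper.

```latex
% Generic TV-L1 optical flow energy with a decreasing weight g inhibiting diffusion at contours:
\[
  E(\mathbf{u}) =
  \int_{\Omega} \lambda\,\bigl|I_1(\mathbf{x}+\mathbf{u}(\mathbf{x})) - I_0(\mathbf{x})\bigr|\,d\mathbf{x}
  + \int_{\Omega} g\bigl(|\nabla I_0(\mathbf{x})|\bigr)\,|\nabla\mathbf{u}(\mathbf{x})|\,d\mathbf{x},
  \qquad g(s) = e^{-\alpha s^{\beta}} .
\]
```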

Relevance: 30.00%

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, aiming to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices, like force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and the experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows. Chapter 1. Description of the physical principles underlying the functioning of a FP and of how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FPs, three- and six-component, of signal acquisition (hardware structure), and of signal calibration; finally, a brief description of the use of FPs in HMA, for balance or gait analysis. Chapter 2. Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by a FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly except with very invasive techniques; consequently they can only be estimated using indirect techniques, such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis. Chapter 3. State of the art in FP calibration. The selected literature is divided into sections, each describing: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of a FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the "starting point" for the new system presented in this thesis. Chapter 4. Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure, for the correct performance of the calibration process. The algorithm characteristics were optimized by a simulation approach, and the results are presented here. In addition, the different versions of the device are described. Chapter 5. Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; from the local and global calibration matrices, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed. This calibration compensates the non-linear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented. Chapter 6. Influence of the FP calibration on the estimation of kinetic quantities with the inverse dynamics approach. Chapter 7. The conclusions of this thesis are presented: the need for a calibration of FPs and the consequent enhancement in kinetic data quality.
Appendix: Calibration of the LC used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
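To illustrate the quantities involved, the sketch below shows how a calibrated force/moment vector could be obtained from raw force-plate channels via a 6x6 calibration matrix, and how a center of pressure follows from it. It is a minimal illustration under simplifying assumptions (identity calibration matrix, reference frame origin on the plate surface, no offset or shear correction), not the calibration algorithm developed in the thesis.

```python
# Sketch: applying a 6x6 calibration matrix C to raw force-plate channel outputs and
# estimating the centre of pressure (COP). Matrix and readings are hypothetical; the COP
# formula assumes the origin lies on the plate surface and neglects correction terms.
import numpy as np

def calibrated_wrench(C, raw):
    """Return [Fx, Fy, Fz, Mx, My, Mz] from raw channel readings via the calibration matrix."""
    return C @ raw

def centre_of_pressure(wrench):
    fx, fy, fz, mx, my, mz = wrench
    cop_x = -my / fz
    cop_y = mx / fz
    return cop_x, cop_y

if __name__ == "__main__":
    C = np.eye(6)                                          # hypothetical (ideal) calibration matrix
    raw = np.array([1.0, -2.0, 700.0, 35.0, -14.0, 0.5])   # hypothetical channel readings
    w = calibrated_wrench(C, raw)
    print("COP (m):", centre_of_pressure(w))
```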

Relevance: 30.00%

Abstract:

Motivation A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences for disclosing the information that they encode. The development of new technologies for genome sequencing in recent years opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, by taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences coming from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences equal to 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
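In the spirit of the SVM-based discrimination described above, the sketch below trains a support vector classifier on simple amino-acid composition features. The feature choice, the toy sequences and the labels are hypothetical illustrations, not the thesis pipeline or its data.

```python
# Sketch: SVM discrimination of thermophilic vs. mesophilic sequences from amino-acid
# composition. Sequences, labels and feature choice are hypothetical illustrations.
from collections import Counter
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition vector of a protein sequence."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

# Hypothetical toy sequences labelled 1 = thermophilic, 0 = mesophilic.
seqs = ["MKKLVEEGKRIEELLKK", "MASNDTSSQQAGGSTNQ", "MEEKLKKIEERVKKLGE", "MQSTNPGASSAQNSDTA"]
labels = [1, 0, 1, 0]

X = [composition(s) for s in seqs]
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print(clf.predict([composition("MKELIEKGKKLVERLKK")]))
```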

Relevance: 30.00%

Abstract:

Society's increasing aversion to technological risks requires the development of inherently safer and environmentally friendlier processes, while still assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Though the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow the trade-off among contradictory aspects and to prevent the "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous for humans and the environment are used or stored. The tools were mainly devoted to application in the stages of "conceptual" and "basic" design, when the project is still open to changes (due to the large number of degrees of freedom) which may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities, throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools gives a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs were based on impact models (some of them complex), but are easy and quick to apply in practice. Their full evaluation is possible even starting from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in different stages of the project lifecycle. The assessment follows an innovative approach in the analysis of inherent safety, based both on the calculation of the expected consequences of potential accidents and on the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previously proposed methods for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. In the assessment of layout plans, ad hoc tools were developed to account for the hazard of domino escalations and for safety economics.
The effectiveness and value of the tools were demonstrated by their application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, of the plant, and of the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for the inherent safety assessment of materials.
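To make the idea of KPI comparison and aggregation concrete, the sketch below normalizes a few indicators against reference values and combines them with policy weights. The KPI names, reference values and weights are hypothetical placeholders and do not represent the reference criteria or indicators developed in the thesis.

```python
# Sketch: aggregating economic, societal and environmental KPIs into a single score via
# normalisation against reference values and policy weights. All values are hypothetical.
def aggregate_kpis(kpis, references, weights):
    """Normalise each KPI by its site-specific reference value and return the weighted sum."""
    score = 0.0
    for name, value in kpis.items():
        normalised = value / references[name]
        score += weights[name] * normalised
    return score

kpis = {"economic_cost": 1.2e6, "societal_risk": 3.5e-5, "env_impact": 420.0}
references = {"economic_cost": 1.0e6, "societal_risk": 1.0e-4, "env_impact": 500.0}
weights = {"economic_cost": 0.4, "societal_risk": 0.35, "env_impact": 0.25}
print(f"Aggregated sustainability score: {aggregate_kpis(kpis, references, weights):.3f}")
```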

Relevance: 30.00%

Abstract:

The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at the system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to face the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both of the methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, consisting in having each peer periodically try to copy another peer which is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithms is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm. In fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We discovered fairness, meant as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown the ability to enforce fairness and to tackle free-riding and cheating nodes.
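The core local rule described above (a low-performing node periodically copying a better-performing one, with occasional mutation) can be sketched as below. This is a minimal, well-mixed illustration of the strategy-copy-and-mutate step only; the actual SLAC/SLACER algorithms also copy the better peer's links, which rewires the topology, and the payoff scheme and parameters here are hypothetical.

```python
# Sketch of the "copy the better peer" rule: each node compares its payoff with a randomly
# chosen node, copies the better strategy, and mutates with small probability. The link-copying
# (topology rewiring) step of SLAC/SLACER is omitted; payoffs and parameters are hypothetical.
import random

# Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
MUTATION_RATE = 0.01

def play_round(strategies, games_per_node=5):
    """Average PD payoff of each node against randomly chosen opponents."""
    n = len(strategies)
    payoff = [0.0] * n
    for i in range(n):
        opponents = random.sample(range(n), games_per_node)
        payoff[i] = sum(PD[(strategies[i], strategies[j])] for j in opponents) / games_per_node
    return payoff

def copy_better_step(strategies, payoff):
    """Compare with a random node, copy its strategy if it performs better, then mutate."""
    n = len(strategies)
    new = list(strategies)
    for i in range(n):
        j = random.randrange(n)
        if payoff[j] > payoff[i]:
            new[i] = strategies[j]
        if random.random() < MUTATION_RATE:
            new[i] = random.choice("CD")
    return new

if __name__ == "__main__":
    random.seed(0)
    strategies = [random.choice("CD") for _ in range(200)]
    for _ in range(100):
        strategies = copy_better_step(strategies, play_round(strategies))
    print("Cooperators:", strategies.count("C"), "of", len(strategies))
```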

Relevance: 30.00%

Abstract:

[EN] The aim of this work is to study several strategies for the preservation of flow discontinuities in variational optical flow methods. We analyze the combination of robust functionals and diffusion tensors in the smoothness assumption. Our study includes the use of tensors based on decreasing functions, which has been shown to provide good results. However, it presents several limitations and usually does not perform better than other basic approaches. It typically introduces instabilities in the computed motion fields in the form of independent blobs of vectors with large magnitude...

Relevance: 30.00%

Abstract:

The purpose of the work is to define and calculate a factor of collapse related to the traditional method for designing sheet pile walls. Furthermore, we tried to find the parameters that most influence a finite element model representative of this problem. The text is structured as follows: in chapters 1 to 5 we analyze a series of topics which are useful for understanding the problem, while the considerations mainly related to the purpose of the work are reported in chapters 6 to 10. In the first part of the document the following topics are covered: what a sheet pile wall is, which codes are to be followed for the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, answering the second part of the purpose of the work. Comparing the results from a laboratory test on a cantilever sheet pile wall in a sandy soil with those provided by a finite element model of the same problem, we concluded that: in modelling a sandy soil we should pay attention to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behavior of the structure-soil system; other parameters, like the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path that we followed in the second part of the text is reported here. We analyzed two different structures: the first is able to support an excavation of 4 m, while the second supports an excavation of 7 m. Both structures are first designed using the traditional method; then these structures are implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangent of the initial friction angle and that of the friction angle at collapse. At the end, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the values of the coefficients assumed in the traditional method and the relative stiffness of the structure-soil system. In the majority of cases, we found that the value of the factor of collapse is between 1.25 and 2. With some considerations, reported in the text, we can compare the values found so far with the value of the safety factor proposed by the code (linked to the friction angle of the soil).
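The definition of the factor of collapse given above can be written compactly as below; the numerical values are a hypothetical illustration, not a result from the work.

```latex
% Factor of collapse as defined above; the numerical values are a hypothetical illustration:
\[
  F_c = \frac{\tan\varphi_0}{\tan\varphi_c},
  \qquad \text{e.g. } \varphi_0 = 35^\circ,\ \varphi_c = 22^\circ
  \;\Rightarrow\; F_c = \frac{\tan 35^\circ}{\tan 22^\circ} \approx 1.73 .
\]
```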

Relevance: 30.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
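The two network descriptors discussed above (characteristic path length and clustering coefficient) are computed on a residue contact network. The sketch below builds such a network from C-alpha coordinates and evaluates both descriptors with networkx; the 8 Å cut-off is a common convention, and the random coordinates are placeholders, not a real protein structure or the thesis's pipeline.

```python
# Sketch: residue contact network from C-alpha coordinates plus the two small-world
# descriptors discussed above. Coordinates are random placeholders, not a real structure.
import numpy as np
import networkx as nx

def contact_network(ca_coords, cutoff=8.0):
    """Nodes are residues; an edge links residues whose C-alpha atoms lie within `cutoff` Angstroms."""
    n = len(ca_coords)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    dists = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if dists[i, j] <= cutoff:
                g.add_edge(i, j)
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 30, size=(120, 3))   # placeholder "structure"
    g = contact_network(coords)
    giant = g.subgraph(max(nx.connected_components(g), key=len))  # path length needs a connected graph
    print("Characteristic path length:", nx.average_shortest_path_length(giant))
    print("Clustering coefficient:", nx.average_clustering(g))
```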