853 results for "Reflection and design"
Abstract:
Three pavement design software packages were compared with regard to how they differ in determining design input parameters and how those parameters influence pavement thickness. StreetPave designs the concrete pavement thickness based on the PCA method and the equivalent asphalt pavement thickness. The WinPAS software designs both concrete and asphalt pavements following the AASHTO 1993 design method. The APAI software designs asphalt pavements based on the pre-mechanistic/empirical AASHTO methodology. First, the following four critical design input parameters were identified: traffic, subgrade strength, reliability, and design life. A sensitivity analysis of these four input parameters was performed with the three software packages to identify which parameters require the most attention during pavement design. Based on the current pavement design procedures and the sensitivity analysis results, a prototype pavement design and sensitivity analysis (PD&SA) software package was developed to retrieve the pavement thickness design value for a given condition and to allow a user to perform a pavement design sensitivity analysis. The prototype PD&SA software is a computer program that stores pavement design results in a database, designed so that the user can input design data from a variety of design programs and query design results for given conditions. The prototype PD&SA package was developed to demonstrate the concept of retrieving pavement design results from the database for a design sensitivity analysis. This final report does not include the prototype software, which will be validated and tested during the next phase.
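A minimal sketch of the query-by-condition idea behind the PD&SA prototype, assuming a hypothetical SQLite schema; the table and column names below are illustrative and not taken from the report:

```python
import sqlite3

# Illustrative schema; the actual PD&SA database layout is not published.
conn = sqlite3.connect("pdsa_results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS design_results (
        program     TEXT,     -- e.g. 'StreetPave', 'WinPAS', 'APAI'
        traffic     REAL,     -- design traffic (million ESALs)
        subgrade    REAL,     -- subgrade strength (e.g. CBR %)
        reliability REAL,     -- design reliability (%)
        design_life INTEGER,  -- years
        thickness   REAL      -- resulting pavement thickness
    )
""")

def query_thickness(program, traffic, subgrade, reliability, design_life):
    """Retrieve stored thickness results for one design condition."""
    cur = conn.execute(
        "SELECT thickness FROM design_results "
        "WHERE program=? AND traffic=? AND subgrade=? "
        "AND reliability=? AND design_life=?",
        (program, traffic, subgrade, reliability, design_life),
    )
    return [row[0] for row in cur.fetchall()]
```

A sensitivity analysis then amounts to repeating such queries while varying one input parameter at a time and comparing the retrieved thicknesses.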
Abstract:
Man's never-ending search for better materials and construction methods, and for techniques of analysis and design, has overcome most of the early difficulties of bridge building. Scour of the stream bed, however, has remained a major cause of bridge failures ever since man learned to place piers and abutments in the stream in order to cross wide rivers. Considering the overall complexity of field conditions, it is not surprising that no generally accepted principles (not even rules of thumb) for the prediction of scour around bridge piers and abutments have evolved from field experience alone. The flow of individual streams exhibits a manifold variation, and great disparity exists among different rivers. The alignment, cross section, discharge, and slope of a stream must all be correlated with the scour phenomenon, and this in turn must be correlated with the characteristics of the bed material, ranging from clays and fine silts to gravels and boulders. Finally, the effect of the shape of the obstruction itself, the pier or abutment, must be assessed. Since several of these factors are likely to vary with time to some degree, and since the scour phenomenon itself is inherently unsteady, sorting out the influence of each of the various factors is virtually impossible from field evidence alone. The experimental approach was chosen as the investigative method for this study, but with due recognition of the importance of field measurements and with the realization that the results must be interpreted so as to be compatible with present-day theories of fluid mechanics and sediment transportation. This approach was chosen because, on the one hand, the factors affecting the scour phenomenon can be controlled in the laboratory to an extent that is not possible in the field, and, on the other hand, the model technique can be used to circumvent the present inadequate understanding of the movement of sediment by flowing water. In order to obtain optimum results from the laboratory study, the program was arranged at the outset to include a related set of variables in each of several phases into which the whole problem was divided. The phases thus selected were: 1. geometry of piers and abutments; 2. hydraulics of the stream; 3. characteristics of the sediment; 4. geometry of channel shape and alignment.
Abstract:
The approach to intervention programs varies depending on the methodological perspective adopted. This means that health professionals lack clear guidelines regarding how best to proceed, and it hinders the accumulation of knowledge. The aim of this paper is to set out the essential and common aspects that should be included in any program evaluation report, thereby providing a useful guide for the professional regardless of the procedural approach used. Furthermore, the paper seeks to integrate the different methodologies and illustrate their complementarity, this being a key aspect in terms of real intervention contexts, which are constantly changing. The aspects to be included are presented in relation to the main stages of the evaluation process: needs, objectives and design (prior to the intervention), implementation (during the intervention), and outcomes (after the intervention). For each of these stages the paper describes the elements on which decisions should be based, highlighting the role of empirical evidence gathered through the application of instruments to defined samples and according to a given procedure.
Abstract:
BACKGROUND: Over the years, somatic care has become increasingly specialized. Furthermore, a rising number of patients requiring somatic care also present with a psychiatric comorbidity. As a consequence, the time and resources needed to care for these patients can interfere with the course of somatic treatment and influence the patient-caregiver relationship. In the light of these observations, the Liaison Psychiatry Unit at the University Hospital in Lausanne (CHUV) has educated its nursing staff in order to strengthen its action within the general care hospital. What has been developed is a reflexive approach based on the supervision of somatic staff, in order to improve the efficiency of liaison psychiatry interventions with the caregivers in charge of patients. The kind of supervision we have developed is the result of a real partnership with somatic staff. Moreover, in order to better understand the complexity of the interactions between the two systems involved, the patient's and the caregivers', we use several theoretical references in an integrative manner. PSYCHOANALYTICAL REFERENCE: The psychoanalytical model allows us to better understand the dynamics between the supervisor and the supervised group, in order to contain and give meaning to the affects arising in the supervision space. "Containing function" and "transitional phenomena" refer to the experience in which emotions can find a space where they can be taken in and processed in a secure and supportive manner. These concepts, along with that of the "psychic envelope", were initially developed to explain the psychological development of the baby in its early interactions with its mother or her surrogate. In the field of supervision, they make us aware of these complex phenomena and of the diverse qualities a supervisor needs, such as attention, support and encouragement, in order to offer a secure environment. SYSTEMIC REFERENCE: A new perspective on the patient's complexity is revealed by the group's dynamics. The supervisor's attention is mainly focused on the work of affects. However, these are often buried under a defensive shell, serving as a temporary protection, which prevents caregivers from recognizing their own emotions, thereby increasing the difficulties in the relationship with the patient. Whenever the work of putting emotions into words fails, we use "sculpting", a technique derived from the systemic model. Through this type of analogical language, affects can emerge without constraint or feelings of danger. Through "playing" in that "transitional space", new exchanges appear between group members and allow new behaviors to be conceived. In practice, we ask the supervisee who is presenting a complex situation to create a spatial representation of his or her understanding of it, through the display of characters significant to the situation: the patient, somatic staff members, relatives of the patient, etc. In silence, the supervisee shapes the characters into postures and arranges them in the room. Each sculpted character is identified, named, and positioned, with his or her gaze set in a specific direction. Finally, the sculptor shapes him or herself in his or her own role. When the sculpture is complete, and after a few moments of stillness, we ask the participants to express themselves about their experience. By means of this physical representation, participants in the sculpture discover perceptions and feelings that were unknown to them until then.
From this analogical representation, reflections and hypotheses of understanding can then arise and be developed within the group. CONCLUSION: Through the use of the concepts of "containing function" and "transitional space", we position ourselves within the scope of encounter and dialog. Through the use of the systemic technique of "sculpting", we promote the process of understanding rather than that of explaining, which would place us in the position of experts. The experience of these encounters has shown us that what we need to focus on is indeed what happens in this transitional space in terms of dynamics and process. The encounter and the sharing of competencies both allow a new understanding of the situation at hand, which has, of course, to be verified in the reality of the patient-caregiver relationship. This often helps the caregiving team adjust its interpersonal skills and recover its containing function, enabling caregivers to respond better to the patient's needs.
Abstract:
The Cognitive Reflection Test (CRT) is a test introduced by S. Frederick (2005; Cognitive reflection and decision making, J Econ Perspect 19(4): 25-42). The task is designed to measure the tendency to override an intuitive response that is incorrect and to engage in further reflection that leads to the correct response. The consistent sex differences in CRT performance may suggest a role for gonadal hormones, particularly testosterone. A now widely studied putative marker for fetal testosterone is the second-to-fourth digit ratio (2D:4D). This paper tests to what extent 2D:4D, as a proxy for prenatal exposure to testosterone, can predict CRT scores in a sample of 623 students. After controlling for sex, we observe that a lower 2D:4D (reflecting a higher exposure to testosterone) is significantly associated with a higher number of correct answers. The result holds for both hands' 2D:4Ds. In addition, the effect appears to be sharper for females than for males. We also control for patience and math proficiency, which are significantly related to performance in the CRT. However, the effect of 2D:4D on CRT performance is not reduced by these controls, implying that these variables do not mediate the relationship between digit ratio and CRT scores.
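The structure of the reported analysis can be illustrated with a hedged regression sketch on synthetic data; the study's actual dataset is not public, and the variable names and effect sizes below are invented:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data mirroring the sample size (n = 623).
rng = np.random.default_rng(0)
n = 623
sex = rng.integers(0, 2, n)            # 0 = male, 1 = female
d2d4 = rng.normal(0.96, 0.03, n)       # hypothetical 2D:4D ratios
crt = np.clip(np.round(3 - 20 * (d2d4 - 0.96) + rng.normal(0, 1, n)), 0, 3)

# OLS of CRT score on digit ratio, controlling for sex; patience and
# math-proficiency covariates would be appended the same way.
X = sm.add_constant(np.column_stack([d2d4, sex]))
model = sm.OLS(crt, X).fit()
print(model.summary())
```

A negative coefficient on the digit ratio that survives the added controls would correspond to the pattern the paper reports.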
Abstract:
SUMMARY: The recognition by recipient T cells of the allograft major histocompatibility complex (MHC) mismatched antigens is the primary event that ultimately leads to rejection. In the transplantation setting, circulating alloreactive CD4+ T cells play a central role in the initiation and the coordination of the immune response and can initiate the rejection of an allograft via three distinct pathways: the direct, the indirect and the recently described semi-direct pathway. However, the exact role of individual CD4+ T-cell subsets in the development of allograft rejection is not clearly defined. Furthermore, besides pathogenic effector T cells, a new subset of T cells with regulatory properties, the CD4+CD25+Foxp3+ (Treg) cells, has come under increased scrutiny over the last decade. The experiments presented in this thesis were designed to better define the phenotype and functional characteristics of CD4+ T-cell subsets and Treg cells in vitro and in vivo in a murine adoptive transfer and skin transplantation model. As Treg cells play a key role in the induction and maintenance of peripheral transplantation tolerance, we have explored whether donor-antigen-specific Treg cells could be expanded in vitro. Here we describe a robust protocol for the ex-vivo generation and expansion of antigen-specific Treg cells, without loss of their characteristic phenotype and suppressive function. In our in vivo transplantation model, antigen-specific Treg cells induced donor-specific tolerance to skin allografts in lymphopenic recipients and significantly delayed skin graft rejection in wild-type mice in the absence of any other immunosuppression. Naïve and memory CD4+ T cells have distinct phenotypes, effector functions and in vivo homeostasis, and thus may play different roles in anti-donor immunity after transplantation. We have analyzed in vitro and in vivo primary alloresponses of naïve and cross-reactive memory CD4+ T cells. We found that the CD4+CD45RBlo memory T-cell pool was heterogeneous and contained cells with regulatory potential, both in the CD4+CD25+ and CD4+CD25- populations. CD4+ T cells capable of inducing strong primary alloreactive responses in vitro and rejection of a first allograft in vivo were mainly contained within the CD45RBhi naïve CD4+ T-cell compartment. Taken together, the work described in this thesis provides new insights into the mechanisms that drive allograft rejection or donor-specific transplantation tolerance. These results will help to optimise current clinical immunosuppressive regimens used after solid organ transplantation and to design new immunotherapeutic strategies to prevent transplant rejection.
Abstract:
Flood simulation studies use spatial-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of convective structures in precipitation systems producing floods in Catalonia (NE Spain). To achieve this purpose, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar, and applying 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are then introduced into and analysed with a GIS. In a first step, different groups of connected pixels with convective precipitation are identified, and only convective structures with an area greater than 32 km² are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the fitted ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution best fits the observed areas, while the Generalized Extreme Value distribution best fits the perimeter, dimensions, orientation and mean areal precipitation. The statistical descriptors and probability distribution functions obtained are of direct use as input to spatial rainfall generators.
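The cell-identification and distribution-fitting steps can be sketched as follows, assuming an already classified convective/stratiform radar field on a 1 km grid; everything below is illustrative except the 32 km² cutoff, which comes from the paper:

```python
import numpy as np
from scipy import ndimage, stats

# Illustrative stand-in for a radar field already classified as
# convective (True) / stratiform (False); 1 km x 1 km pixels assumed.
rng = np.random.default_rng(1)
field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=6)
convective = field > np.percentile(field, 93)

# Group connected convective pixels into candidate rain cells.
labels, n_cells = ndimage.label(convective)
areas_km2 = np.bincount(labels.ravel())[1:].astype(float)  # pixel count = km^2

# Keep only structures larger than 32 km^2, as in the paper.
large = areas_km2[areas_km2 > 32.0]

# Fit a Generalized Pareto distribution to the selected cell areas
# (the paper's best-fitting distribution for this characteristic).
shape, loc, scale = stats.genpareto.fit(large)
print(f"{large.size} cells; GPD shape={shape:.2f}, scale={scale:.1f}")
```

The other characteristics (perimeter, ellipse dimensions, orientation, mean areal precipitation) would be handled the same way, with stats.genextreme in place of stats.genpareto.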
Abstract:
Understanding molecular recognition is one major requirement for drug discovery and design. Physicochemical and shape complementarity between two binding partners is the driving force during complex formation. In this study, the impact of shape within this process is analyzed. Protein binding pockets and co-crystallized ligands are represented by normalized principal moments of inertia ratios (NPRs). The corresponding descriptor space is triangular, with its corners occupied by spherical, discoid, and elongated shapes. An analysis of a selected set of sc-PDB complexes suggests that pockets and bound ligands avoid spherical shapes, which are, however, prevalent in small unoccupied pockets. Furthermore, a direct shape comparison confirms previous studies finding that, on average, only one third of a pocket is filled by its bound ligand, supplemented by a 50% subpocket coverage. In this study, we found that shape complementarity is expressed by low pairwise shape distances in NPR space, short distances between the centers of mass, and small deviations in the angle between the first principal ellipsoid axes. Furthermore, it is assessed how different binding pocket parameters relate to the bioactivity and binding efficiency of the co-crystallized ligand. In addition, the performance of different shape and size parameters of pockets and ligands is evaluated in a virtual screening scenario performed on four representative targets.
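A hedged sketch of how NPR descriptors of this kind are typically computed from 3-D coordinates; this is the standard construction with a unit-mass approximation, not code from the paper:

```python
import numpy as np

def npr(coords, masses=None):
    """Normalized principal moments of inertia ratios (I1/I3, I2/I3).

    With I1 <= I2 <= I3, the ratios place a shape in the triangular
    NPR space whose corners are rod (0, 1), disc (0.5, 0.5) and
    sphere (1, 1). Unit masses are assumed unless given.
    """
    coords = np.asarray(coords, dtype=float)
    m = np.ones(len(coords)) if masses is None else np.asarray(masses, dtype=float)
    r = coords - np.average(coords, axis=0, weights=m)  # center-of-mass frame
    inertia = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        inertia += mi * (ri @ ri * np.eye(3) - np.outer(ri, ri))
    i1, i2, i3 = np.sort(np.linalg.eigvalsh(inertia))
    return i1 / i3, i2 / i3

# A set of collinear points is rod-like: NPR close to (0, 1).
print(npr(np.column_stack([np.linspace(0, 10, 20), np.zeros(20), np.zeros(20)])))
```

Pairwise distances between pocket and ligand points in this 2-D NPR space then quantify the shape complementarity discussed above.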
Abstract:
Concomitant aortic and mitral valve replacement, or concomitant aortic valve replacement and mitral repair, can be a challenge for the cardiac surgeon: in particular, because of their structure and design, two bioprosthetic heart valves, or an aortic valve prosthesis and a rigid mitral ring, can interfere at the level of the mitroaortic junction. Therefore, when a mitral bioprosthesis or a rigid mitral ring is already in place and a surgical aortic valve replacement becomes necessary, or when older high-risk patients require concomitant mitral and aortic procedures, the new 'fast-implantable' aortic valve system (Intuity valve, Edwards Lifesciences, Irvine, CA, USA) can represent a smart alternative to a standard aortic bioprosthesis. Unfortunately, this remains controversial (risk of interference). However, transcatheter aortic valve replacements have been performed in patients with previously implanted mitral valves or mitral rings, and we have learned that there is no interference (or no significant interference) between the standard valve and the stent valve. Consequently, we can assume that a fast-implantable valve can also be safely placed next to a biological mitral valve or a rigid mitral ring without risk of distortion, malpositioning, high gradients or paravalvular leak. This paper describes two cases: a concomitant Intuity aortic valve and bioprosthetic mitral valve implantation, and a concomitant Intuity aortic valve and mitral ring implantation.
Abstract:
The goal of this paper is to describe a complete and extensive prototype design of a fixed electrical attenuator. The paper starts by describing the function of the component and by giving some basic information about attenuators. After a comprehensive description of the component, the role of reverse engineering is discussed; the method is applied to ease the manufacturing and design stages of this component. Information about materials and the applied manufacturing technologies is also included in this report. By applying specified DFMA aspects, the final design turned out to be a viable prototype device for manufacture and further analysis.
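For context, the resistor values of a fixed attenuator follow directly from the desired attenuation and the system impedance. The sketch below uses the classic matched T-pad topology as an illustration; the paper does not state which topology its prototype uses:

```python
def t_pad(attenuation_db: float, z0: float = 50.0) -> tuple[float, float]:
    """Resistor values for a symmetric, matched T-pad attenuator.

    z0 is the system impedance in ohms. Returns (series arm, shunt arm).
    The topology choice is an illustrative assumption.
    """
    k = 10 ** (attenuation_db / 20.0)       # voltage attenuation ratio
    r_series = z0 * (k - 1.0) / (k + 1.0)   # each of the two series arms
    r_shunt = 2.0 * z0 * k / (k * k - 1.0)  # single shunt arm to ground
    return r_series, r_shunt

# Example: a 10 dB pad in a 50-ohm system -> ~26.0 and ~35.1 ohms.
rs, rp = t_pad(10.0)
print(f"series arms: {rs:.1f} ohm each, shunt arm: {rp:.1f} ohm")
```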
Abstract:
This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and what issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised and means to take this into consideration are suggested. The work is mainly related to terrestrial and mobile digital video broadcasting systems, but many of the findings can be generalised to other transmission systems and design processes. The work has led to three main conclusions. First, it is discovered that there are no sufficiently accurate error criteria for measuring the subjectively perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested and similar work related to other services is recommended. Second, it is suggested that in addition to commercial requirements there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, technical quality of service and service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges and provides means to optimise transmission parameters based on these. Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from a technical, but briefly also from a commercial and regulatory, perspective.
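As a hedged illustration of optimising transmission parameters against a channel model and an error criterion: the mode table, penalties and target below are invented, and only the fading-channel error expression is the standard textbook result for coherent BPSK/QPSK on a flat Rayleigh channel:

```python
import math

def rayleigh_ber(ebn0_db: float) -> float:
    """Average BER of coherent BPSK/QPSK on a flat Rayleigh fading channel."""
    g = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

# Hypothetical mode table: relative bit rate and extra Eb/N0 (dB) each
# mode needs compared with the baseline; the numbers are invented.
MODES = {
    "QPSK 1/2":  (1.0,  0.0),
    "16QAM 1/2": (2.0,  6.0),
    "64QAM 2/3": (4.0, 12.0),
}

def pick_mode(ebn0_db: float, target_ber: float = 1e-4):
    """Return the fastest mode whose predicted BER meets the criterion."""
    feasible = [
        (rate, name) for name, (rate, penalty) in MODES.items()
        if rayleigh_ber(ebn0_db - penalty) <= target_ber
    ]
    return max(feasible)[1] if feasible else None

print(pick_mode(45.0))  # -> '16QAM 1/2' with these invented numbers
```

Swapping the channel model or the error criterion changes the chosen parameters, which is precisely why the thesis treats both as central design inputs.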
Abstract:
This study investigates the transformation of teaching practice in a Catalan school, connected to the design, implementation and development of project-based learning, and focusing on dialogic learning in order to investigate its limits and possibilities. Qualitative and design-based research (DBR) methods are applied. These methods combine empirical educational research with the theory-driven design of learning environments. DBR is proposed and applied using practical guidance for the teachers of the school. It can be associated with the current proposals for Embedding Social Sciences and Humanities in the Horizon 2020 Societal Challenges, a position statement that defends the social sciences and the humanities as fundamental to facing all societal challenges. The results of this study show that before the training process, teachers applied dialogic learning only at specific moments (for example, when talking about the weekend); during and after the process, however, they worked systematically with dialogic learning through the PEPT: they start and finish every activity with an individual and group reflection on their own processes, favouring motivation, reasoning and the involvement of all the participants. These results show that progressive transformations of teaching practice benefit cooperative work in class.
Abstract:
Validation and verification operations encounter various challenges in the product development process. Requirements for an increasing development cycle pace set new demands on the component development process. Verification and validation usually represent the largest activities, consuming up to 40-50% of R&D resources. This research studies validation and verification as part of the case company's component development process. The target is to define a framework that can be used to improve the evaluation and development of validation and verification capability in display module development projects. The definition and background of validation and verification are studied in this research, together with theories of project management, systems, organisational learning and causality. The framework and the key findings of this research are presented, and a feedback system based on the framework is defined and implemented at the case company. The research is divided into a theory part and an empirical part: the theory part is conducted as a literature review, and the empirical part as a case study using the constructive and design research methods. A framework for capability evaluation and development was defined and developed as the result of this research. A key finding was that a double-loop learning approach combined with the validation and verification V+ model enables the definition of a feedback reporting solution. As additional results, some minor changes to the validation and verification process were proposed. A few concerns are expressed about the validity and reliability of the results, the most important one concerning the selected research method and the selected model itself: the final state can be normative, since the researcher may set the study results before the actual study, and in the initial state the researcher may describe expectations for the study. Finally, the reliability and validity of this work are examined.
Abstract:
Introduction. This study presents the results of the process of implementing a portfolio over four consecutive years. The plan includes three phases (initiation, development and consolidation). The sample comprises 480 first-year nursing students at the University of Girona. The objective is to evaluate the effectiveness of the instrument and to achieve its construction through a self-regulated process. Subjects and methods. The proposed methodology is based on sequential triangulation between methods: the same empirical unit is studied with two research strategies, quantitative and qualitative. Study 1 is quantitative, descriptive, longitudinal and prospective. The statistical analysis of paired data for continuous variables that follow a normal distribution is performed with the Student-Fisher t test, and the correlation between two numerical variables is assessed with the Pearson correlation coefficient. Study 2 is qualitative, using discussion groups and thematic analysis; Atlas.ti is used for the textual data analysis. Results. The final score of students who prepare the portfolio is higher (7.78) than that of those who do not (7) (p ≤ 0.001). A significant correlation exists between the portfolio score and the final score (p ≤ 0.001). The trend study shows a greater sensitivity of the assessment instrument. Conclusion. The final design of the portfolio is mixed and flexible; it encourages student reflection and strengthens reflection on the continuum of learning.
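The two quantitative analyses named above can be sketched as follows on synthetic stand-in scores; the study's real data are not public:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in scores for illustration only (n = 480 as in the study).
rng = np.random.default_rng(2)
portfolio_score = rng.normal(7.5, 1.0, 480)
final_score = 0.6 * portfolio_score + rng.normal(3.0, 0.8, 480)

# Paired comparison of continuous, normally distributed scores
# (the study's Student-Fisher t test for paired data).
t, p_t = stats.ttest_rel(final_score, portfolio_score)

# Association between the two numerical variables (Pearson correlation).
r, p_r = stats.pearsonr(portfolio_score, final_score)
print(f"paired t = {t:.2f} (p = {p_t:.3g}); Pearson r = {r:.2f} (p = {p_r:.3g})")
```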
Abstract:
The amount of installed wind power has been growing exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. Especially the behaviour of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimisation of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level below nominal most of the time, an efficiency analysis at the nominal operating point alone is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations to the analysis of the economic feasibility of a wind turbine. The utilisation of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on the analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when considering different subsystems as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modelling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
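The underlying idea, evaluating the drive train across the whole operating range weighted by the site's wind climate rather than only at the nominal point, can be sketched as follows; every curve and parameter below is invented for illustration:

```python
import numpy as np

v = np.linspace(0.0, 25.0, 500)   # wind-speed bins (m/s)
dv = v[1] - v[0]

# Weibull wind-speed density for a hypothetical site (k = 2, A = 8 m/s).
k, A = 2.0, 8.0
pdf = (k / A) * (v / A) ** (k - 1) * np.exp(-(v / A) ** k)

# Toy power curve (kW): cubic rise from cut-in (3 m/s) to rated (12 m/s).
p_rated = 2000.0
power = np.where(v < 3.0, 0.0,
                 np.minimum(p_rated * ((v - 3.0) / 9.0) ** 3, p_rated))

# Invented part-load drive-train efficiency: poor at low power, ~0.9 near rated.
eff = 0.94 * power / (power + 0.05 * p_rated)

# Annual energy production: integrate electrical power over the wind climate.
aep_mwh = np.sum(power * eff * pdf) * dv * 8760.0 / 1000.0
print(f"estimated AEP: {aep_mwh:.0f} MWh/yr")
```

Repeating this for two drive-train configurations and feeding the energy difference into an investment calculation is, in spirit, what the simulation-based comparison of the thesis does, with the efficiency curve coming from time domain simulation instead of an invented formula.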