923 results for Wooffitt, Robin: Conversation analysis. Principles, practices and application
Abstract:
Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rockslope characterization and monitoring. Landslide and rockfall movements can be detected by means of comparison of sequential scans. One of the most pressing challenges of natural hazards is combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. It consisted of a known displacement of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
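The Nearest Neighbour averaging step can be sketched numerically. The following is a minimal illustration on synthetic data (all values invented, not the paper's dataset): each point's scan-to-scan difference is replaced by the mean over its k nearest neighbours, which suppresses instrumental noise while preserving the underlying millimetric displacement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stable surface": a flat 20 x 20 grid of points (metres).
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
surface = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])

true_disp = 0.003  # 3 mm displacement along z between the two epochs
noise = 0.005      # 5 mm instrumental noise (1 sigma), larger than the signal

scan1 = surface + rng.normal(0, noise, surface.shape)
scan2 = surface + [0, 0, true_disp] + rng.normal(0, noise, surface.shape)

# Raw point-to-point differences along z: the signal is buried in noise.
raw_dz = scan2[:, 2] - scan1[:, 2]

# Nearest Neighbour averaging: replace each difference by the mean over
# the k nearest points in the (x, y) plane.
def nn_average(points_xy, values, k=25):
    averaged = np.empty_like(values)
    for i, p in enumerate(points_xy):
        d = np.linalg.norm(points_xy - p, axis=1)
        averaged[i] = values[np.argsort(d)[:k]].mean()
    return averaged

smooth_dz = nn_average(surface[:, :2], raw_dz, k=25)

print(f"raw:    mean={raw_dz.mean()*1000:.2f} mm, sigma={raw_dz.std()*1000:.2f} mm")
print(f"NN-avg: mean={smooth_dz.mean()*1000:.2f} mm, sigma={smooth_dz.std()*1000:.2f} mm")
```

With a 5 mm (1σ) noise level and a 3 mm true displacement, the raw differences are dominated by noise, while the averaged field recovers the displacement with a much smaller spread, mirroring the kind of error reduction the abstract reports.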
Abstract:
Selected configuration interaction (SCI) for atomic and molecular electronic structure calculations is reformulated in a general framework encompassing all CI methods. The linked cluster expansion is used as an intermediate device to approximate CI coefficients B_K of disconnected configurations (those that can be expressed as products of combinations of singly and doubly excited ones) in terms of CI coefficients of lower-excited configurations, where each K is a linear combination of configuration state functions (CSFs) over all degenerate elements of K. Disconnected configurations up to sextuply excited ones are selected by Brown's energy formula, ΔE_K = (E − H_KK) B_K^2 / (1 − B_K^2), with B_K determined from coefficients of singly and doubly excited configurations. The truncation energy error from disconnected configurations, ΔE_dis, is approximated by the sum of the ΔE_K of all discarded Ks. The remaining (connected) configurations are selected by thresholds based on natural orbital concepts. Given a model CI space M, a usual upper bound E_S is computed by CI in a selected space S, and E_M = E_S + ΔE_dis + δE, where δE is a residual error which can be calculated by well-defined sensitivity analyses. An SCI calculation on the Ne ground state featuring 1077 orbitals is presented. Convergence to within near-spectroscopic accuracy (0.5 cm^-1) is achieved in a model space M of 1.4 × 10^9 CSFs (1.1 × 10^12 determinants) containing up to quadruply excited CSFs. Accurate energy contributions of quintuples and sextuples in a model space of 6.5 × 10^12 CSFs are obtained. The impact of SCI on various orbital methods is discussed. Since ΔE_dis can readily be calculated for very large basis sets without the need of a CI calculation, it can be used to estimate the orbital basis incompleteness error. A method for precise and efficient evaluation of E_S is taken up in a companion paper.
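Brown's selection formula lends itself to a short numerical sketch. The values below are invented for illustration (hartree units) and the threshold is arbitrary; the point is the mechanics of estimating each configuration's energy contribution from its coefficient, keeping the significant ones, and accumulating the truncation error ΔE_dis from the discarded ones:

```python
import numpy as np

def brown_energy_gain(E, H_KK, B_K):
    """Brown's estimate of the energy lowering contributed by
    configuration K: dE_K = (E - H_KK) * B_K**2 / (1 - B_K**2)."""
    return (E - H_KK) * B_K**2 / (1.0 - B_K**2)

# Hypothetical current variational energy and candidate configurations.
E = -128.9                                       # hartree, illustrative
H = np.array([-120.0, -118.5, -125.0, -110.0])   # diagonal elements H_KK
B = np.array([1e-2, 5e-3, 2e-2, 1e-3])           # estimated coefficients B_K

dE = brown_energy_gain(E, H, B)

threshold = 1e-4  # hartree; keep configurations whose |dE_K| exceeds it
selected = np.abs(dE) >= threshold
discarded_error = dE[~selected].sum()   # approximates the truncation error

print("dE_K:", dE)
print("selected:", selected)
print("truncation error estimate:", discarded_error)
```

The last configuration falls below the threshold and is discarded; its estimated contribution is carried along as part of the truncation error rather than lost.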
Abstract:
The right to be treated humanely when detained is universally recognized. Deficiencies in detention conditions and violence, however, subvert this right. When this occurs, proper medico-legal investigations are critical irrespective of the nature of death. Unfortunately, the very context of custody raises serious concerns over the effectiveness and fairness of medico-legal examinations. The aim of this manuscript is to identify and discuss the practical and ethical difficulties encountered in the medico-legal investigation following deaths in custody. Data for this manuscript come from a larger project on Death in Custody that examined the causes of deaths in custody and the conditions under which these deaths should be investigated and prevented. A total of 33 stakeholders from forensic medicine, law, prison administration or national human rights administration were interviewed. Data obtained were analyzed qualitatively. Forensic experts are an essential part of the criminal justice process as they offer evidence for subsequent indictment and eventual punishment of perpetrators. Their independence when investigating a death in custody was deemed critical and lack thereof, problematic. When experts were not independent, concerns arose in relation to conflicts of interest, biased perspectives, and low-quality forensic reports. The solutions to ensure independent forensic investigations of deaths in custody must be structural and simple: setting binding standards of practice rather than detailed procedures and relying on preexisting national practices as opposed to encouraging new practices that are unattainable for countries with limited resources.
Abstract:
Now more than ever, Cape Verdean teachers are faced with student acts of violence which reverberate to the outer limits of our society at large. These situations have caused teachers to reflect upon their roles as educators and “promoters” of discipline. Some are even questioning their effectiveness in this process as they are challenged to discover if they are in the right place. How do teachers manage discipline? In the past, punishment was the rule of thumb, and to some extent it remains the measure of choice. But is this the most effective form of behavior management? What is the relationship between punishment and corrective behavior? This paper will discuss the effect that punishment has on the learning process as it attempts to suggest strategies for managing discipline with the objective of creating an effective learning environment.
Abstract:
MHC class II-peptide multimers are important tools for the detection, enumeration and isolation of antigen-specific CD4+ T cells. However, their erratic and often poor performance impeded their broad application and thus in-depth analysis of key aspects of antigen-specific CD4+ T cell responses. In the first part of this thesis we demonstrate that a major cause for poor MHC class II tetramer staining performance is incomplete peptide loading on MHC molecules. We observed that peptide binding affinity for "empty" MHC class II molecules poorly correlates with peptide loading efficacy. Addition of a His-tag or desthiobiotin (DTB) at the peptide N-terminus allowed us to isolate "immunopure" MHC class II-peptide monomers by affinity chromatography; this significantly, often dramatically, improved tetramer staining of antigen-specific CD4+ T cells. Insertion of a photosensitive amino acid between the tag and the peptide permitted removal of the tag from the "immunopure" MHC class II-peptide complex by UV irradiation, and hence elimination of its potential interference with TCR and/or MHC binding. Moreover, to improve loading of self and tumor antigen-derived peptides onto "empty" MHC II molecules, we first loaded these with a photocleavable variant of the influenza A hemagglutinin peptide HA306-318 and subsequently exchanged it with a poorly loading peptide (e.g. NY-ESO-1119-143) upon photolysis of the conditional ligand. Finally, we established a novel type of MHC class II multimers built on reversible chelate formation between 2xHis-tagged MHC molecules and a fluorescent nitrilotriacetic acid (NTA)-containing scaffold. Staining of antigen-specific CD4+ T cells with "NTAmers" is fully reversible and allows gentle cell sorting. In the second part of the thesis we investigated the role of the CD8α transmembrane domain (TMD) for CD8 coreceptor function. The sequence of the CD8α TMD, but not the CD8β TMD, is highly conserved and homodimerizes efficiently.
We replaced the CD8α TMD with that of the interleukin-2 receptor α chain (CD8αTac) and thus ablated CD8α TMD interactions. We observed that T1 T cell hybridomas expressing CD8αTacβ exhibited severely impaired intracellular calcium flux, IL-2 responses and Kd/PbCS(ABA) P255A tetramer binding. By means of fluorescence resonance energy transfer (FRET) experiments we established that CD8αTacβ associated with TCR:CD3 considerably less efficiently than CD8αβ, both in the presence and the absence of Kd/PbCS(ABA) complexes. Moreover, we observed that CD8αTacβ partitioned substantially less into lipid rafts and, related to this, associated less efficiently with p56Lck (Lck), a Src kinase that plays key roles in TCR proximal signaling. Our results support the view that the CD8α TMD promotes the formation of CD8αβ-CD8αβ dimers on cell surfaces. Because these contain two CD8β chains, and because CD8β, unlike CD8α, mediates association of CD8 with TCR:CD3 as well as with lipid rafts and hence with Lck, we propose that the CD8α TMD plays an important and hitherto unrecognized role in CD8 coreceptor function, namely by promoting CD8αβ dimer formation. We discuss what implications this might have for TCR oligomerization and TCR signaling.
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
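The two stationary patterns described above can be reproduced in a deliberately stripped-down sketch (not the paper's model; two products, a one-period memory state, and invented parameters): with a positive state-dependence term the dynamic program settles into an inertial policy, and with a negative term into a variety-seeking one:

```python
import numpy as np

# Two products; the state is the product bought last period.
# Period utility: u(j | state i) = base[j] + theta * (j == i)
# theta > 0 -> inertia (repeat purchases); theta < 0 -> variety seeking.
base = np.array([1.0, 1.0])
beta = 0.9  # discount factor

def steady_state_policy(theta, n_iter=500):
    V = np.zeros(2)
    for _ in range(n_iter):
        # Q[i, j]: value of buying j when the last purchase was i,
        # plus the discounted continuation value of landing in state j.
        Q = base[None, :] + theta * np.eye(2) + beta * V[None, :]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)  # optimal choice in each state

print(steady_state_policy(+0.5))  # inertial policy: keep the current product
print(steady_state_policy(-0.5))  # variety seeking: switch every period
```

In the inertial case the optimal action in each state is to repeat the previous purchase; in the variety-seeking case it is to switch, giving the alternating consumption path the paper characterizes.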
Abstract:
This paper presents findings from a study investigating a firm's ethical practices along the value chain. In so doing we attempt to better understand potential relationships between a firm's ethical stance with its customers and those of its suppliers within a supply chain, and to identify particular sectoral and cultural influences that might impinge on this. Drawing upon a database comprising 667 industrial firms from 27 different countries, we found that ethical practices begin with the firm's relationship with its customers, the characteristics of which then influence the ethical stance with the firm's suppliers within the supply chain. Importantly, market structure along with some key cultural characteristics was also found to exert significant influence on the implementation of ethical policies in these firms.
Abstract:
The discipline of Enterprise Architecture Management (EAM) deals with the alignment of business and information systems architectures. While EAM has long been regarded as a discipline for IT managers, this book takes a different stance: it explains how top executives can use EAM to leverage their strategic planning and controlling processes, and how EAM can contribute to sustainable competitive advantage. Based on the analysis of best practices from eight leading European companies from various industries, the book presents crucial elements of successful EAM. It outlines what executives need to do in terms of governance, processes, methodologies and culture in order to bring their management to the next level. Beyond this, the book points out how EAM might develop in the next decade, allowing today's managers to prepare for the future of architecture management.
Abstract:
BACKGROUND: Studies on hexaminolevulinate (HAL) cystoscopy report improved detection of bladder tumours. However, recent meta-analyses report conflicting effects on recurrence. OBJECTIVE: To assess available clinical data for blue light (BL) HAL cystoscopy on the detection of Ta/T1 and carcinoma in situ (CIS) tumours, and on tumour recurrence. DESIGN, SETTING, AND PARTICIPANTS: This meta-analysis reviewed raw data from prospective studies on 1345 patients with known or suspected non-muscle-invasive bladder cancer (NMIBC). INTERVENTION: A single application of HAL cystoscopy was used as an adjunct to white light (WL) cystoscopy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: We studied the detection of NMIBC (intention to treat [ITT]: n=831; six studies) and recurrence (per protocol: n=634; three studies) up to 1 yr. DerSimonian and Laird's random-effects model was used to obtain pooled relative risks (RRs) and associated 95% confidence intervals (CIs) for detection outcomes. RESULTS AND LIMITATIONS: BL cystoscopy detected significantly more Ta tumours (14.7%; p<0.001; odds ratio [OR]: 4.898; 95% CI, 1.937-12.390) and CIS lesions (40.8%; p<0.001; OR: 12.372; 95% CI, 6.343-24.133) than WL. At least one additional Ta/T1 tumour was seen with BL in 24.9% of patients (p<0.001); this was also significant in patients with primary (20.7%; p<0.001) and recurrent cancer (27.7%; p<0.001), and in patients at high risk (27.0%; p<0.001) and intermediate risk (35.7%; p=0.004). In 26.7% of patients, CIS was detected only by BL (p<0.001); this was also significant in patients with primary (28.0%; p<0.001) and recurrent cancer (25.0%; p<0.001). Recurrence rates up to 12 mo were significantly lower overall with BL, 34.5% versus 45.4% (p=0.006; RR: 0.761 [0.627-0.924]), and lower in patients with T1 or CIS (p=0.052; RR: 0.696 [0.482-1.003]), Ta (p=0.040; RR: 0.804 [0.653-0.991]), and in high-risk (p=0.050) and low-risk (p=0.029) subgroups.
Some subgroups had too few patients to allow statistically meaningful analysis. Heterogeneity was minimised by the statistical analysis method used. CONCLUSIONS: This meta-analysis confirms that HAL BL cystoscopy significantly improves the detection of bladder tumours leading to a reduction of recurrence at 9-12 mo. The benefit is independent of the level of risk and is evident in patients with Ta, T1, CIS, primary, and recurrent cancer.
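The DerSimonian and Laird random-effects pooling used for the recurrence outcome can be sketched as follows. The per-study counts are invented for illustration only; the mechanics (inverse-variance weights, Cochran's Q, the DL estimate of the between-study variance tau², and re-weighting) follow the standard method:

```python
import math

# Hypothetical per-study recurrence counts: (events_BL, n_BL, events_WL, n_WL)
studies = [(60, 200, 80, 200), (45, 150, 55, 148), (70, 210, 90, 205)]

y, w = [], []   # per-study log relative risks and inverse-variance weights
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1/a - 1/n1 + 1/c - 1/n2      # approximate variance of log RR
    y.append(log_rr)
    w.append(1 / var)

# Fixed-effect estimate and Cochran's Q statistic
fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, y))
k = len(studies)

# DerSimonian-Laird between-study variance tau^2 (truncated at zero)
c_dl = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c_dl)

# Random-effects weights and pooled RR with 95% CI
w_re = [1 / (1/wi + tau2) for wi in w]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
rr, lo, hi = math.exp(mu), math.exp(mu - 1.96*se), math.exp(mu + 1.96*se)
print(f"pooled RR = {rr:.3f} (95% CI {lo:.3f}-{hi:.3f}), tau^2 = {tau2:.4f}")
```

A pooled RR below 1 with a CI excluding 1 would correspond to the kind of significant recurrence reduction reported above; with these invented counts the point estimate favours BL.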
Abstract:
This article summarizes the basic principles of photoelectron spectroscopy for surface analysis, with examples of applications in material science that illustrate the capabilities of the related techniques.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence?
Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) that roads are but one sub-system of a much larger 'Road Based Transportation System', 2) that the size and activity level of the overall system are determined by market forces, 3) that the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation, and 4) that the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed. This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used in full system, partial system, single segment, project, and general design guide levels of analysis.
The method appeared to be capable of remedying many of the existing work method defects and to answer society's transportation questions from a new perspective.
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined. It is shown how different current measurement error types affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model to analyse the effect of the frequency converter non-idealities on the performance of electric drives is created. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model is capable of coupling with separate simulation software of complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is applicable also to speed sensorless drives.
The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
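The link between current measurement error type and harmonic order can be illustrated with a short simulation (all parameters invented). In the rotating reference frame of a PMSM the torque-producing current i_q is proportional to torque; a sensor offset then appears as ripple at the fundamental electrical frequency, while a gain error appears at twice the fundamental:

```python
import numpy as np

f_e = 50.0                       # electrical frequency, Hz (illustrative)
t = np.arange(0, 0.2, 1e-4)      # 0.2 s at 10 kHz sampling
theta = 2 * np.pi * f_e * t

# Ideal stationary-frame currents giving a constant i_q = I (zero ripple).
I = 10.0
i_alpha = -I * np.sin(theta)
i_beta = I * np.cos(theta)

def iq(i_a, i_b, theta):
    """Park transform: q-axis current from alpha/beta currents."""
    return -i_a * np.sin(theta) + i_b * np.cos(theta)

def dominant_harmonic(x):
    """Frequency (Hz) of the largest spectral component after mean removal."""
    X = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), 1e-4)
    return freqs[X.argmax()]

# A DC offset in one current sensor -> ripple at the fundamental frequency.
iq_offset = iq(i_alpha + 0.5, i_beta, theta)
# A gain error in one current sensor -> ripple at twice the fundamental.
iq_gain = iq(1.05 * i_alpha, i_beta, theta)

print(dominant_harmonic(iq_offset))  # ~ fundamental (50 Hz)
print(dominant_harmonic(iq_gain))    # ~ second harmonic (100 Hz)
```

Detecting which harmonic dominates is exactly what makes a compensation scheme like the one described above possible: the harmonic order identifies the error type, and its amplitude over time indicates whether a trial correction is helping.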
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor. A failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment since it constitutes ∼75% of the total exposure range, which corresponds to an exposure estimate of 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
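The reasoning behind the source-compartment finding can be illustrated with a toy multiplicative model (factor names and ranges invented, not those of ART or Stoffenmanager): in a product-of-factors model, a factor's maximum local influence on the log exposure is the log-width of its allowed range, so the factor with the widest range dominates the total exposure range:

```python
import math

# Toy multiplicative exposure model: exposure = product of modifying
# factors, each chosen from a category-dependent range. All names and
# ranges below are illustrative only.
ranges = {
    "substance_emission":  (1e-2, 1e3),   # source strength
    "handling":            (0.3, 3.0),
    "local_controls":      (0.03, 1.0),
    "general_ventilation": (0.3, 1.0),
}

# In a multiplicative model, mis-choosing a factor shifts log10(exposure)
# by at most the log10 width of that factor's range; each factor's share
# of the summed widths shows which decisions dominate the estimate.
width = {k: math.log10(hi / lo) for k, (lo, hi) in ranges.items()}
total = sum(width.values())

for k in sorted(width, key=width.get, reverse=True):
    print(f"{k:20s} {width[k]:.2f} decades  ({100 * width[k] / total:.0f}% of total)")
```

A one-at-a-time sensitivity analysis of this kind is local by construction; it ranks decisions by their worst-case leverage on the estimate, which is the sense in which the source compartment dominates ART's output.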
Abstract:
Meta-analyses are considered an important pillar of evidence-based medicine. The aim of this review is to describe the main principles of a meta-analysis and to use examples from head and neck oncology to demonstrate their clinical impact and methodological interest. The major role of individual patient data is outlined, as well as the superiority of individual patient data over meta-analyses based on published summary data. The major clinical breakthroughs of head and neck meta-analyses are summarized, regarding concomitant chemotherapy, altered fractionation radiotherapy, new regimens of induction chemotherapy and the use of radioprotectants. Recent methodological developments are described, including network meta-analyses and the validation of surrogate markers. Lastly, the future of meta-analyses is discussed in the context of personalized medicine.