864 results for Limits of Judicial Interpretation
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process - identification of relevant information, interpretation or hypothesis generation - at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.
Abstract:
Protein conformations and dynamics can be studied by nuclear magnetic resonance spectroscopy using dilute liquid crystalline samples. This work clarifies the interpretation of residual dipolar coupling data yielded by such experiments. It was discovered that unfolded proteins without any additional structure beyond that of a mere polypeptide chain exhibit residual dipolar couplings. It was also found that molecular dynamics induce fluctuations in the molecular alignment and in doing so affect residual dipolar couplings; this finding clarified the origins of the low order parameter values observed earlier. The work required the development of new analytical and computational methods for the prediction of intrinsic residual dipolar coupling profiles for unfolded proteins. The presented characteristic chain model is able to reproduce the general trend of experimental residual dipolar couplings for denatured proteins. The details of experimental residual dipolar coupling profiles are beyond the analytical model, but improvements are proposed to achieve greater accuracy. A computational method for rapid prediction of unfolded protein residual dipolar couplings was also developed. Protein dynamics were shown to modulate the effective molecular alignment in a dilute liquid crystalline medium. The effects were investigated using experimental and molecular dynamics-generated conformational ensembles of folded proteins. It was noted that dynamics-induced alignment is significant especially for the interpretation of molecular dynamics in small, globular proteins. A method of correction was presented. Residual dipolar couplings offer an attractive possibility for the direct observation of protein conformational preferences and dynamics. The presented models and methods of analysis provide significant advances in the interpretation of residual dipolar coupling data from proteins.
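As background for the abstract above, the residual dipolar coupling of an internuclear vector at polar angles (θ, φ) in the alignment frame is conventionally written as follows (this is the standard textbook expression, not a formula taken from this work):

```latex
D(\theta,\varphi) = D_a \left[ \left(3\cos^2\theta - 1\right) + \tfrac{3}{2}\, R \sin^2\theta \cos 2\varphi \right]
```

where $D_a$ is the axial magnitude of the molecular alignment tensor and $R$ its rhombicity; conformational averaging over $(\theta,\varphi)$ is what connects the measured couplings to structure and dynamics.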
Abstract:
In this work, we investigate the intrinsic limits of the subthreshold slope in a dual-gated bilayer graphene transistor using a coupled self-consistent Poisson-bandstructure solver. We benchmark the solver by matching the bias-dependent band gap results obtained from the solver against published experimental data. We show that the intrinsic bias dependence of the electronic structure and the self-consistent electrostatics limit the subthreshold slope obtained in such a transistor to well above the Boltzmann limit of 60 mV/decade at room temperature, but much below the results experimentally shown to date, indicating room for technological improvement of bilayer graphene.
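The 60 mV/decade figure quoted above is the thermionic limit SS = ln(10)·kT/q; a minimal numerical check (function name is ours):

```python
import math

def subthreshold_slope_mv_per_decade(temperature_k):
    """Boltzmann (thermionic) limit of the subthreshold slope: ln(10) * kT/q."""
    k_over_q = 8.617333262e-5  # Boltzmann constant / electron charge, in V/K
    return math.log(10.0) * k_over_q * temperature_k * 1e3  # in mV/decade

# At room temperature the limit is ~59.5 mV/decade, commonly rounded to 60
print(round(subthreshold_slope_mv_per_decade(300.0), 1))
```

Any conventional thermionic transistor is bounded below by this value; steeper switching requires a different injection mechanism.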
Abstract:
This paper deals with the simulation-driven study of the impact of hardened steel projectiles on thin aluminium target plates using explicit finite element analysis as implemented in LS-DYNA. The evaluation of finite element modelling includes a comprehensive mesh convergence study using shell elements for representing target plates and the solid element-based representation of ogival-nosed projectiles. A user-friendly automatic contact detection algorithm is used for capturing interaction between the projectile and the target plate. It is shown that the proper choice of mesh density and strain rate-dependent material properties is crucial as these parameters significantly affect the computed residual velocity. The efficacy of correlation with experimental data is adjudged in terms of a 'correlation index' defined in the present study, for which values close to unity are desirable. By simulating laboratory impact tests on thin aluminium plates carried out by earlier investigators, extremely good prediction of experimental ballistic limits has been observed, with correlation indices approaching unity. Additional simulation-based parametric studies have been carried out and results consistent with test data have been obtained. The simulation procedures followed in the present study can be applied with confidence in designing thin aluminium armour plates for protection against low calibre projectiles.
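The abstract does not reproduce the paper's definition of its correlation index. One plausible illustrative form, comparing simulated and measured residual velocities, is sketched below; this is entirely our assumption, not the authors' formula:

```python
def correlation_index(simulated, measured):
    """Hypothetical agreement measure: mean ratio of simulated to measured
    residual velocities; values close to unity indicate good correlation."""
    ratios = [s / m for s, m in zip(simulated, measured) if m > 0]
    return sum(ratios) / len(ratios)

# Perfect agreement between simulation and experiment gives exactly 1.0
print(correlation_index([610.0, 540.0], [610.0, 540.0]))
```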
Abstract:
Metallic glasses are of interest because of their mechanical properties: they can be ductile as well as brittle. This is true of Pd77.5Cu6Si16.5, a ternary glassy alloy. Indeed, the most stable metallic glasses are those which are alloys of noble or transition metals. A general formula is postulated as T70–80G30–20, where T stands for one or several 3d transition elements and G includes the metalloid glass formers. Another general formula is A3B to A5B, where B is a metalloid. A computer method utilising the MIGAP computer program of Kaufman is used to calculate the miscibility gap over a range of temperatures. The precipitation of a secondary crystalline phase is postulated around 1500 K. This could produce a dispersed-phase composite with interesting high-temperature strength properties.
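A miscibility gap of the kind computed with MIGAP can be illustrated with the symmetric regular-solution model, whose spinodal follows from d²G/dx² = 0. The interaction parameter Ω below is an illustrative value chosen so that the critical point lands near the 1500 K mentioned above; it is not a fitted Pd-Cu-Si parameter:

```python
R_GAS = 8.314  # gas constant, J/(mol*K)

def spinodal_temperature(x, omega):
    """Spinodal of a symmetric regular solution, from
    d2G/dx2 = RT/(x(1-x)) - 2*Omega = 0, i.e. T(x) = 2*Omega*x*(1-x)/R.
    Inside this curve a single-phase mixture is unstable and decomposes."""
    return 2.0 * omega * x * (1.0 - x) / R_GAS

OMEGA = 25_000.0  # J/mol, illustrative interaction parameter
# The critical point sits at x = 0.5, where T_c = Omega / (2R) ~ 1503 K
print(round(spinodal_temperature(0.5, OMEGA)))
```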
Abstract:
The quaternary system Sb–Te–Bi–Se with small amounts of suitable dopants is of interest for the manufacture of thermoelectric modules which exhibit the Peltier and Seebeck effects. This property could be useful in the production of energy from the thermoelectric effect. Other materials of interest are bismuth telluride (Bi2Te3) and Sb–Te–Bi, and compounds such as ZnIn2Se4. In the present paper the application of computer programs such as MIGAP of Kaufman is used to indicate the stability of the ternary limits of Sb–Te–Bi within the temperature range of interest, namely 273 K to 300 K.
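Thermoelectric performance of materials such as Bi2Te3 is usually summarized by the dimensionless figure of merit ZT = S²σT/κ; a minimal sketch with representative room-temperature values (illustrative numbers, not measurements from this work):

```python
def figure_of_merit(seebeck_v_per_k, elec_cond_s_per_m, thermal_cond_w_per_m_k, temperature_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return (seebeck_v_per_k ** 2) * elec_cond_s_per_m * temperature_k / thermal_cond_w_per_m_k

# Representative Bi2Te3-like numbers near room temperature (illustrative):
# S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m*K), T = 300 K -> ZT ~ 0.8
print(round(figure_of_merit(200e-6, 1.0e5, 1.5, 300.0), 2))
```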
Abstract:
This study discusses legal interpretation. The question is how legal texts, for instance laws, statutes and regulations, can and do have meaning. Language makes interpretation difficult as it holds no definite meanings. When the theoretical connection between semantics and legal meaning is loosened and we realise that language cannot be a means of justifying legal decisions, the responsibility inherent in legal interpretation can be seen in full. We are thus compelled to search for ways to analyse this responsibility. The main argument of the book is that the responsibility of legal interpretation contains a responsibility towards the text that is interpreted (and, through the mediation of the text, also towards the legal system), but not only this. It is not simply a responsibility to read and read well; it extends further than this. It includes responsibility for the effects of the interpretation in a particular situation and with regard to the people whose case is decided. Ultimately, it is a responsibility to do justice. These two aspects of responsibility are conceptualised here as the two dimensions of the ethics of legal interpretation: the textual and the situational. The basic conception of language presented here is provided by Ludwig Wittgenstein's later philosophy, but the argument is not committed to only one philosophical tradition. Wittgenstein can be counterpointed in interesting ways by Jacques Derrida's ideas on language and meaning. Derrida's work also functions as a contrast to hermeneutic theories. It is argued that the seed of an answer to the question of meaning lies in the inter-personal and situated activity of interpretation and communication, an idea that can be discerned in different ways in the works of Wittgenstein, Derrida and Hans-Georg Gadamer. In this way the question of meaning naturally leads us to think about ethics, which is approached here through the philosophy of Emmanuel Levinas.
His thinking, focusing on topics such as otherness, friendship and hospitality, provides possibilities for answering some of the questions posed in this book. However, at the same time we move inside a normativity where ethics and politics come together in many ways. The responsibility of legal interpretation is connected to the political and this has to be acknowledged lest we forget that law always implies force. But it is argued here that the political can be explored in positive terms as it does not have to mean only power or violence.
Abstract:
The dissertation examines the role of the EU courts in new governance. New governance has raised unprecedented interest in the EU in recent years. This is manifested in a plethora of instruments and actors at various levels that challenge more traditional forms of command-and-control regulation. New governance and political experimentation more generally is thought to sap the ability of the EU judiciary to monitor and review these experiments. The exclusion of the courts is then seen to add to the legitimacy problem of new governance. The starting point of this dissertation is the observation that the marginalised role of the courts is based on theoretical and empirical assumptions which invite scrutiny. The theoretical framework of the dissertation is deliberative democracy and democratic experimentalism. The analysis of deliberative democracy is sustained by an attempt to apply theoretical concepts to three distinctive examples of governance in the EU. These are the EU Sustainable Development Strategy, the European Chemicals Agency, and the Common Implementation Strategy for the Water Framework Directive. The case studies show numerous disincentives and barriers to judicial review. Among these are questions of the role of courts in shaping governance frameworks, the reviewability of science-based measures, the standing of individuals before the courts, and the justiciability of soft law. The dissertation analyses the conditions of judicial review in each governance environment and proposes improvements. From a more theoretical standpoint it could be said that each case study presents a governance regime which builds on legislation that lays out major (guide)lines but leaves details to be filled out at a later stage. Specification of detailed standards takes place through collaborative networks comprising members from national administrations, NGOs, and the Commission. 
Viewed this way, deliberative problem-solving is needed to bring people together to clarify, elaborate, and revise largely abstract and general norms in order to resolve concrete and specific problems and to make law applicable and enforceable. The dissertation draws attention to the potential of the peer review embedded in these arrangements and its profound consequences for judicial accountability structures. It is argued that without this kind of ongoing and dynamic peer review of accountability in governance frameworks, judicial review of new governance is difficult and in some cases impossible. This claim has implications for how we understand the concept of soft law, the role of the courts, participation rights, and the legitimacy of governance measures more generally. The experimentalist architecture of judicial decision-making relies upon a wide variety of actors to provide conditions for legitimate and efficient review.
Abstract:
Oligomeric copper(I) clusters are formed by the insertion reaction of copper(I) aryloxides into heterocumulenes. The effect of varying the steric demands of the heterocumulene and the aryloxy group on the nuclearity of the oligomers formed has been probed. Reactions of copper(I) 2-methoxyphenoxide and copper(I) 2-methylphenoxide with PhNCS result in the formation of the hexameric complexes hexakis[N-phenylimino(aryloxy)methanethiolato copper(I)] 3 and 4, respectively. Single crystal X-ray data confirmed the structure of 3. Similar insertion reactions of CS2 with the copper(I) aryloxides formed by 2,6-di-tert-butyl-4-methylphenol and 2,6-dimethylphenol result in oligomeric copper(I) complexes 7 and 8 having the (aryloxy)thioxanthate ligand. Complex 7 was confirmed to be a tetramer by single crystal X-ray crystallography. Reactions carried out with 2-mercaptopyrimidine, which has ligating properties similar to N-alkylimino(aryloxy)methanethiolate, result in the formation of an insoluble polymeric complex 11. The fluorescence spectra of the oligomeric complexes are helpful in determining their nuclearity. It has been shown that a decrease in the steric requirements of either the heterocumulene or aryloxy parts of the ligand can compensate for steric constraints and facilitate oligomerization. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
We consider a dense ad hoc wireless network comprising n nodes confined to a given two-dimensional region of fixed area. For the Gupta-Kumar random traffic model and a realistic interference and path loss model (i.e., the channel power gains are bounded above, and are bounded below by a strictly positive number), we study the scaling of the aggregate end-to-end throughput with respect to the network average power constraint, P̄, and the number of nodes, n. The network power constraint P̄ is related to the per-node power constraint, P, as P̄ = nP. For large P̄, we show that the throughput saturates as Θ(log P̄), irrespective of the number of nodes in the network. For moderate P̄, which can accommodate spatial reuse to improve end-to-end throughput, we observe that the amount of spatial reuse feasible in the network is limited by the diameter of the network. In fact, we observe that the end-to-end path loss in the network and the amount of spatial reuse feasible in the network are inversely proportional. This puts a restriction on the gains achievable using previously studied cooperative communication techniques, as these rely on direct long-distance communication over the network.
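The Θ(log P̄) saturation reflects the logarithmic growth of Shannon capacity with transmit power; a small illustration (bandwidth and noise values are arbitrary):

```python
import math

def awgn_capacity(power, noise, bandwidth=1.0):
    """Shannon capacity of an AWGN link, C = W * log2(1 + P/N):
    at high power, capacity grows only logarithmically in P."""
    return bandwidth * math.log2(1.0 + power / noise)

# Doubling an already-large power buys only about one extra bit/s/Hz,
# so the aggregate throughput saturates as Theta(log P) at high power
gain = awgn_capacity(1024.0, 1.0) - awgn_capacity(512.0, 1.0)
print(round(gain, 2))
```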
Abstract:
A new breed of processors like the Cell Broadband Engine, the Imagine stream processor and the various GPU processors emphasize data-level parallelism (DLP) and thread-level parallelism (TLP) as opposed to traditional instruction-level parallelism (ILP). This allows them to achieve order-of-magnitude improvements over conventional superscalar processors for many workloads. However, it is unclear how much parallelism of these types exists in current programs. Most earlier studies have largely concentrated on the amount of ILP in a program, without differentiating DLP or TLP. In this study, we investigate the extent of data-level parallelism available in programs in the MediaBench suite. By packing instructions in a SIMD fashion, we observe reductions of up to 91% (84% on average) in the number of dynamic instructions, indicating a very high degree of DLP in several applications.
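The instruction-count reduction from SIMD packing can be sketched with a simple counting model (our illustration, not the paper's methodology):

```python
import math

def simd_reduction(scalar_ops, simd_width):
    """Fraction of dynamic instructions eliminated when scalar_ops independent,
    identical operations are packed into SIMD instructions of width simd_width."""
    packed_ops = math.ceil(scalar_ops / simd_width)
    return 1.0 - packed_ops / scalar_ops

# 1024 independent adds packed 8 wide -> 87.5% fewer dynamic instructions;
# wider vectors and more packable operations push the figure toward the
# 91% upper bound reported above
print(simd_reduction(1024, 8))
```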
Abstract:
We theoretically analyze the performance of transition metal dichalcogenide (MX2) single-wall nanotube (SWNT) surround-gate MOSFETs at the 10 nm technology node. We consider semiconducting armchair (n, n) SWNTs of MoS2, MoSe2, WS2, and WSe2 for our study. The material properties of the nanotubes are evaluated from density functional theory, and the ballistic device characteristics are obtained by self-consistently solving the Poisson-Schrödinger equation under the non-equilibrium Green's function formalism. Simulated ON currents are in the range of 61-76 μA for 4.5 nm diameter MX2 tubes, with peak transconductance of ~175-218 μS and ON/OFF ratio of ~0.6 × 10^5 to 0.8 × 10^5. The subthreshold slope is ~62.22 mV/decade and a nominal drain-induced barrier lowering of ~12-15 mV/V is observed for the devices. The tungsten dichalcogenide nanotubes offer superior device output characteristics compared to the molybdenum dichalcogenide nanotubes, with WSe2 showing the best performance. Studying SWNT diameters of 2.5-5 nm, it is found that an increase in diameter yields a smaller carrier effective mass and 4%-6% higher ON currents. Using mean free path calculations to project the quasi-ballistic currents, a 62%-75% reduction of the drain current from its ballistic value is observed at longer channel lengths of 100 and 200 nm.
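The quasi-ballistic projection mentioned at the end is consistent with the common first-order transmission model T = λ/(λ + L); the mean free path value below is our illustrative assumption, not a number from the paper:

```python
def ballistic_current_fraction(mean_free_path_nm, channel_length_nm):
    """First-order quasi-ballistic transmission, T = lambda / (lambda + L):
    the fraction of the ballistic current that survives at channel length L."""
    return mean_free_path_nm / (mean_free_path_nm + channel_length_nm)

# With an assumed lambda of 50 nm, a 100 nm channel loses ~67% of the
# ballistic current, in the spirit of the 62%-75% reductions quoted above
print(round(1.0 - ballistic_current_fraction(50.0, 100.0), 2))
```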
Abstract:
It is shown that the metric representation of DNA sequences is one-to-one. By using the metric representation method, the suppression of nucleotide strings in DNA sequences is determined. For a DNA sequence, an optimal string length to display the genomic signature in chaos game representation is obtained by eliminating effects of the finite sequence length. The optimal string length is further shown to be a self-similarity limit in computing the information dimension. Using the method, self-similarity limits of the complete genomic signatures of bacteria are further determined.
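The chaos game representation underlying this abstract maps each base to a corner of the unit square and moves the current point halfway toward it; because each step halves the remaining uncertainty, the map is one-to-one. A minimal sketch (the corner assignment below is one common convention):

```python
def cgr_points(sequence):
    """Chaos game representation of a DNA sequence: start at the centre of the
    unit square and move halfway toward the corner of each successive base."""
    corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}
    x, y = 0.5, 0.5
    points = []
    for base in sequence:
        cx, cy = corners[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))
    return points

print(cgr_points("AG"))  # [(0.25, 0.25), (0.625, 0.625)]
```

Plotting the points for a long sequence yields the fractal genomic signature the abstract refers to.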