964 results for Explicit Expressions
Abstract:
The increasing emphasis on evidence-based clinical practice has thrown into sharp focus multiple deficiencies in current systems of ethical review. This paper argues that a complete overhaul of systems for ethical oversight of studies involving human subjects is now required, as developments in medical, epidemiological and genetic research have outstripped existing structures for ethical supervision. It shows that many problems are now evident and concludes that sequential and piecemeal amendments to present arrangements are inadequate to address these. At their core, present systems of ethical review still rely on the integrity and judgement of individual investigators. One possible alternative is to train and license research investigators, make their responsibilities explicit and have ethics committees devote much more of their time to monitoring research activity in order to detect those infringing the rules.
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
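The three ingredients the abstract names — explicit concept declarations, a controlled vocabulary of legal values, and a concept hierarchy — can be illustrated with a small sketch. None of the class names, slot names, or vocabulary entries below come from OWEB or MHCWeb; they are invented for illustration only.

```python
# Hypothetical miniature of a knowledge-based resource: concepts are
# declared explicitly, slot values are restricted to a controlled
# vocabulary, and concepts are organized hierarchically.
# All identifiers here are invented, not taken from OWEB/MHCWeb.

CONTROLLED_VOCAB = {
    "species": {"human", "mouse"},
    "mhc_class": {"I", "II"},
}

class Concept:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.slots = name, parent, {}

    def set(self, slot, value):
        """Assign a slot value, enforcing the controlled vocabulary."""
        allowed = CONTROLLED_VOCAB.get(slot)
        if allowed is not None and value not in allowed:
            raise ValueError(f"{value!r} not in controlled vocabulary for {slot}")
        self.slots[slot] = value

    def lineage(self):
        """Walk the hierarchy from this concept up to the root."""
        node, names = self, []
        while node:
            names.append(node.name)
            node = node.parent
        return names

root = Concept("Peptide")
epitope = Concept("MHC-binding peptide", parent=root)
epitope.set("species", "human")
epitope.set("mhc_class", "I")
print(epitope.lineage())   # ['MHC-binding peptide', 'Peptide']
```

The point of the controlled vocabulary is visible in the `set` method: a value outside the declared set is rejected at entry time, which is what gives such resources a consistent global data model.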
Abstract:
Two families, originally diagnosed as having nonsyndromic X-linked mental retardation (NSXLMR), were reviewed when it was shown that they had a 24-bp duplication (428-451dup(24bp)) in the ARX gene [Stromme et al., 2002: Nat Genet 30:441-445]. This same duplication had also been found in three other families: one with X-linked infantile spasms and hypsarrhythmia (X-linked West syndrome, MIM 308350) and two with XLMR and dystonic movements of the hands (Partington syndrome, MIM 309510). On review, manifestations of both West and Partington syndromes were found in some individuals from both families. In addition, it was found that one individual had autism and two had autistic behavior, one of whom had epilepsy. The degree of mental retardation ranged from mild to severe. A GCG trinucleotide expansion (GCG)10+7 and a deletion of 1,517 bp in the ARX gene have also been found in association with the West syndrome, and a missense mutation (1058C>T) in a family with a newly recognized form of myoclonic epilepsy, severe mental retardation, and spastic paraplegia [Scheffer et al., 2002: Neurology, in press]. Evidently all these disorders are expressions of mutations in the same gene. It remains to be seen what proportions of patients with infantile spasms, focal dystonia, autism, epilepsy, and nonsyndromic mental retardation are accounted for by mutations in the ARX gene. (C) 2002 Wiley-Liss, Inc.
Abstract:
In contrast to curative therapies, preventive therapies are administered to largely healthy individuals over long periods. The risk-benefit and cost-benefit ratios are more likely to be unfavourable, making treatment decisions difficult. Drug trials provide insufficient information for treatment decisions, as they are conducted on highly selected populations over short durations, estimate only relative benefits of treatment and offer little information on risks and costs. Epidemiological modelling is a method of combining evidence from observational epidemiology and clinical trials to assist in clinical and health policy decision-making. It can estimate absolute benefits, risks and costs of long-term preventive strategies, and thus allow their precise targeting to individuals for whom they are safest and most cost-effective. Epidemiological modelling also allows explicit information about risks and benefits of therapy to be presented to patients, facilitating informed decision-making.
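The abstract's central contrast — trials report relative benefit, while treatment decisions need absolute benefit — can be made concrete with a standard calculation (all numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Minimal sketch: converting a trial's relative risk into absolute
# benefit for individuals with different baseline risks.
# The risks and relative risk used here are invented for illustration.

def absolute_benefit(baseline_risk, relative_risk):
    """Return absolute risk reduction (ARR) and number needed to treat (NNT)."""
    treated_risk = baseline_risk * relative_risk
    arr = baseline_risk - treated_risk
    return arr, 1.0 / arr

# The same 25% relative reduction (RR = 0.75) yields very different
# absolute benefits in low-risk and high-risk individuals.
for baseline in (0.02, 0.20):
    arr, nnt = absolute_benefit(baseline, 0.75)
    print(f"baseline {baseline:.0%}: ARR = {arr:.3f}, NNT = {nnt:.0f}")
```

This is the arithmetic behind "precise targeting": the high-risk individual gains ten times the absolute benefit from the identical relative effect, so the same therapy can be cost-effective for one patient and not for another.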
Abstract:
We study the transformation of maximally entangled states under the action of Lorentz transformations in a fully relativistic setting. By explicit calculation of the Wigner rotation, we describe the relativistic analog of the Bell states as viewed from two inertial frames moving with constant velocity with respect to each other. Though the finite dimensional matrices describing the Lorentz transformations are non-unitary, each single particle state of the entangled pair undergoes an effective, momentum dependent, local unitary rotation, thereby preserving the entanglement fidelity of the bipartite state. The details of how these unitary transformations are manifested are explicitly worked out for the Bell states comprised of massive spin 1/2 particles and massless photon polarizations. The relevance of this work to non-inertial frames is briefly discussed.
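The mechanism described — a momentum-dependent local unitary on each particle — can be written schematically in the standard Wigner-rotation notation (this is the textbook form with a convention-dependent normalization factor, not an equation quoted from the paper):

```latex
U(\Lambda)\,|p,\sigma\rangle
  = \sqrt{\frac{(\Lambda p)^{0}}{p^{0}}}\,
    \sum_{\sigma'} D_{\sigma'\sigma}\!\bigl(W(\Lambda,p)\bigr)\,
    |\Lambda p,\sigma'\rangle
```

For a Bell pair the two Wigner rotations combine into a momentum-dependent local unitary $U_{1}(p_{1})\otimes U_{2}(p_{2})$, and local unitaries cannot change the entanglement of a bipartite state, which is why the entanglement fidelity is preserved even though the boost matrices themselves are non-unitary.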
Abstract:
Event-related potentials (ERPs) were recorded while subjects made old/new recognition judgments on new unstudied words and old words which had been presented at study either once ('weak') or three times ('strong'). The probability of an 'old' response was significantly higher for strong than weak words and significantly higher for weak than new words. Comparisons were made initially between ERPs to new, weak and strong words, and subsequently between ERPs associated with six strength-by-response conditions. The N400 component was found to be modulated by memory trace strength in a graded manner. Its amplitude was most negative in new word ERPs and most positive in strong word ERPs. This 'N400 strength effect' was largest at the left parietal electrode (in ear-referenced ERPs). The amplitude of the late positive complex (LPC) effect was sensitive to decision accuracy (and perhaps confidence). Its amplitude was larger in ERPs evoked by words attracting correct versus incorrect recognition decisions. The LPC effect had a left > right, centro-parietal scalp topography (in ear-referenced ERPs). Hence, whereas the majority of previous ERP studies of episodic recognition have interpreted results from the perspective of dual-process models, we provide alternative interpretations of N400 and LPC old/new effects in terms of memory strength and decisional factor(s). (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber, Phys. Rev. Lett. 86, 4402 (2001)].
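The paper's scheme operates on weak continuous measurement records, but the underlying idea of measurement-conditioned feedback can be caricatured in discrete time. The toy below uses a single projective syndrome measurement of the stabilizer Z1 Z2 on a two-qubit code, which is far simpler than the paper's construction and is offered only as intuition:

```python
# Discrete-time caricature of syndrome-based feedback correction.
# Basis order of the 4-component state vector: |00>, |01>, |10>, |11>.

def apply_X1(state):
    """Bit flip on qubit 1: swaps |0b> <-> |1b> amplitudes."""
    a00, a01, a10, a11 = state
    return [a10, a11, a00, a01]

def syndrome_ZZ(state):
    """Expectation of the stabilizer Z1 Z2 (diagonal +1, -1, -1, +1),
    assuming real amplitudes for simplicity."""
    a00, a01, a10, a11 = state
    return a00**2 - a01**2 - a10**2 + a11**2

# Logical state a|00> + b|11>, stabilized by Z1 Z2 (syndrome +1).
a, b = 0.6, 0.8
psi = [a, 0.0, 0.0, b]

err = apply_X1(psi)                  # a bit-flip error hits qubit 1
# Feedback: a negative syndrome triggers the correcting X1 pulse.
corrected = apply_X1(err) if syndrome_ZZ(err) < 0 else err

print("syndrome after error:", syndrome_ZZ(err))
print("state restored:", corrected == psi)
```

In the continuous-measurement setting of the paper, the same conditioning happens incrementally on a noisy measurement current rather than on a single projective outcome.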
Abstract:
The integral of the Wigner function of a quantum-mechanical system over a region or its boundary in the classical phase plane is called a quasiprobability integral. Unlike a true probability integral, its value may lie outside the interval [0, 1]. It is characterized by a corresponding self-adjoint operator, to be called a region or contour operator as appropriate, which is determined by the characteristic function of that region or contour. The spectral problem is studied for commuting families of region and contour operators associated with concentric discs and circles of given radius a. Their respective eigenvalues are determined as functions of a, in terms of the Gauss-Laguerre polynomials. These polynomials provide a basis of vectors in a Hilbert space carrying the positive discrete series representation of the algebra su(1,1) ≈ so(2,1). The explicit relation between the spectra of operators associated with discs and circles with proportional radii is given in terms of the discrete-variable Meixner polynomials.
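That a quasiprobability integral can fall outside [0, 1] is easy to verify numerically for a concrete case. The sketch below integrates the Wigner function of the n = 1 Fock state over a disc of radius a, using the common convention W_n(r) = ((-1)^n / pi) exp(-r^2) L_n(2 r^2) with hbar = 1 (a convention choice on our part, not necessarily the paper's):

```python
import math

# Numerical check that a disc quasiprobability integral can be negative.
# Convention assumed: W_n(r) = ((-1)^n / pi) * exp(-r^2) * L_n(2 r^2).

def quasiprob_disc_n1(a, steps=20000):
    """Integrate W_1 over the disc of radius a (midpoint rule, polar coords)."""
    total, dr = 0.0, a / steps
    for i in range(steps):
        r = (i + 0.5) * dr
        L1 = 1.0 - 2.0 * r * r               # Laguerre polynomial L_1(2 r^2)
        w = (-1.0 / math.pi) * math.exp(-r * r) * L1
        total += w * 2.0 * math.pi * r * dr  # area element 2 pi r dr
    return total

# The same integral in closed form: 1 - exp(-a^2) * (1 + 2 a^2).
for a in (0.5, 3.0):
    exact = 1.0 - math.exp(-a * a) * (1.0 + 2.0 * a * a)
    print(f"a = {a}: numeric = {quasiprob_disc_n1(a):+.4f}, exact = {exact:+.4f}")
```

For small a the integral is negative (the Wigner function of the first excited state is negative near the origin), while for large a it tends to 1, illustrating why these disc-operator eigenvalues are quasiprobabilities rather than probabilities.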
Abstract:
An assessment of the changes in the distribution and extent of mangroves within Moreton Bay, southeast Queensland, Australia, was carried out. Two assessment methods were evaluated: spatial and temporal pattern metrics analysis, and change detection analysis. Currently, about 15,000 ha of mangroves are present in Moreton Bay. These mangroves are important ecosystems, but are subject to disturbance from a number of sources. Over the past 25 years, there has been a loss of more than 3800 ha, as a result of natural losses and mangrove clearing (e.g. for urban and industrial development, agriculture and aquaculture). However, areas of new mangroves have become established over the same time period, offsetting these losses to create a net loss of about 200 ha. These new mangroves have mainly appeared in the southern bay region and the bay islands, particularly on the landward edge of existing mangroves. In addition, spatial patterns and species composition of mangrove patches have changed. The pattern metrics analysis provided an overview of mangrove distribution and change in the form of single metric values, while the change detection analysis gave a more detailed and spatially explicit description of change. An analysis of the effects of spatial scales on the pattern metrics indicated that they were relatively insensitive to scale at spatial resolutions less than 50 m, but that most metrics became sensitive at coarser resolutions, a finding which has implications for mapping of mangroves based on remotely sensed data. (C) 2003 Elsevier Science B.V. All rights reserved.
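Pattern-metric analysis of the kind described reduces a classified raster to summary numbers such as patch count and mean patch area. The toy below computes those two metrics on an invented binary grid (1 = mangrove, 0 = other); the grid, cell size, and metric set are illustrative only and far simpler than the paper's analysis:

```python
from collections import deque

# Toy pattern-metric computation on a binary mangrove raster.
# Grid values and cell area are invented for illustration.

def patch_metrics(grid, cell_area_ha=1.0):
    """Count 4-connected patches of 1s and report total and mean patch area."""
    rows, cols = len(grid), len(grid[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                size, q = 0, deque([(r, c)])     # breadth-first flood fill
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
                sizes.append(size)
    total = sum(sizes) * cell_area_ha
    return {"patches": len(sizes), "total_ha": total,
            "mean_patch_ha": total / len(sizes) if sizes else 0.0}

grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
print(patch_metrics(grid))   # 2 patches of 3 cells each
```

The scale sensitivity the abstract reports corresponds, in this framing, to how such metrics change when the grid is resampled to coarser cells: small patches merge or vanish, shifting patch counts and mean sizes.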
Abstract:
Rumors collected from a large public hospital undergoing change were content analyzed, and a typology comprising the following five broad types of change-related rumors was developed: rumors about changes to job and working conditions, nature of organizational change, poor change management, consequences of the change for organizational performance, and gossip-rumors. Rumors were also classified as positive or negative on the basis of their content. As predicted, negative rumors were more prevalent than positive rumors. Finally, employees reporting negative rumors also reported more change-related stress as compared to those who reported positive rumors and those who did not report any rumors. The authors propose that rumors be treated as verbal symbols and expressions of employee concerns during organizational change.
Abstract:
This article recalls a classic scheme for categorizing attitude measures. One particular group of measures, those that rely on respondents' interpretations of partially structured stimuli, has virtually disappeared from attitude research. An attitude measure based on respondents' interpretation of partially structured stimuli is considered. Four studies employing such a measure demonstrate that it predicts unique variance in self-reported and actual behavior, beyond that predicted by explicit and contemporary implicit measures and regardless of whether the attitude object under consideration is fraught with social desirability concerns. Implications for conceptualizing attitude measurement and attitude-behavior relations are discussed.
Abstract:
This paper uses a fully operational inter-regional computable general equilibrium (CGE) model implemented for the Brazilian economy, based on previous work by Haddad and Hewings, in order to assess the likely economic effects of road transportation policy changes in Brazil. Among the features embedded in this framework, modelling of external scale economies and transportation costs provides an innovative way of dealing explicitly with theoretical issues related to integrated regional systems. The model is calibrated for 109 regions. The explicit modelling of transportation costs built into the inter-regional CGE model, based on origin-destination flows that take into account the spatial structure of the Brazilian economy, creates the capability of integrating the inter-regional CGE model with a geo-coded transportation network model, enhancing the potential of the framework for understanding the role of infrastructure in regional development. The transportation model used is the so-called Highway Development and Management model, developed by the World Bank and implemented using the software TransCAD. Further extensions of the current model specification for integrating other features of transport planning in a continental industrialising country like Brazil are discussed, with the goal of building a bridge between conventional transport planning practices and the innovative use of CGE models. In order to illustrate the analytical power of the integrated system, the authors present a set of simulations, which evaluate the ex ante economic impacts of physical/qualitative changes in the Brazilian road network (for example, a highway improvement), in accordance with recent policy developments in Brazil. Rather than providing a critical evaluation of this debate, they intend to emphasise the likely structural impacts of such policies. They expect that the results will reinforce the need to better specify spatial interactions in inter-regional CGE models.
Abstract:
This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results.
Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
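The efficiency condition imposed on the recursive version — that the CO2 price rises at the interest rate, so discounted marginal abatement costs are equalized across periods — is a Hotelling-style price path that is simple to write down. The starting price and interest rate below are illustrative, not EPPA outputs:

```python
# Sketch of the inter-temporal efficiency condition: the CO2 price
# grows at the interest rate, p_t = p_0 * (1 + r)^t, so the discounted
# price (and hence marginal abatement cost) is constant across periods.
# p0 and r are invented for illustration.

def co2_price_path(p0, interest_rate, years):
    """Hotelling-style price path over the given number of years."""
    return [p0 * (1.0 + interest_rate) ** t for t in range(years)]

path = co2_price_path(p0=25.0, interest_rate=0.04, years=5)
print([round(p, 2) for p in path])
```

Under this path, shifting one tonne of abatement between any two periods leaves the present-value cost unchanged, which is exactly the condition the recursive model is forced to satisfy by construction.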
Abstract:
Independent brain circuits appear to underlie different forms of conditioned fear, depending on the type of conditioning used, such as a context or explicit cue paired with footshocks. Several clinical reports have associated damage to the medial temporal lobe (MTL) with retrograde amnesia. Although a number of studies have elucidated the neural circuits underlying conditioned fear, the involvement of MTL components in the aversive conditioning paradigm is still unclear. To address this issue, we assessed freezing responses and Fos protein expression in subregions of the rhinal cortex and ventral hippocampus of rats following exposure to a context, light or tone previously paired with footshock (Experiment 1). A comparable degree of freezing was observed in the three types of conditioned fear, but with distinct patterns of Fos distribution. The groups exposed to cued fear conditioning did not show changes in Fos expression, whereas the group subjected to contextual fear conditioning showed selective activation of the ectorhinal (Ect), perirhinal (Per), and entorhinal (Ent) cortices, with no changes in the ventral hippocampus. We then examined the effects of the benzodiazepine midazolam injected bilaterally into these three rhinal subregions in the expression of contextual fear conditioning (Experiment 2). Midazolam administration into the Ect, Per, and Ent reduced freezing responses. These findings suggest that contextual and explicit stimuli endowed with aversive properties through conditioning recruit distinct brain areas, and the rhinal cortex appears to be critical for storing context-, but not explicit cue-footshock, associations. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function to measure the similarity between images in an application context which complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow for comparisons of feature vectors by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other for strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with the Lp distance family is considered to propose a procedure to determine the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking as reference the perception of the specialist in the field (radiologist), the AID functions perform better than the general distance functions commonly used in CBIR.
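The abstract does not reproduce the AID formulas, so the sketch below only illustrates the design question they address: two feature vectors can differ by the same total amount either in one attribute or in several attributes concurrently, and an ordinary Lp distance already weighs those two cases differently depending on p. The vectors and values are invented:

```python
# Illustration of "attribute concurrence": the same total attribute
# change, spread over one vs. two attributes, scores differently
# under different members of the Minkowski L_p family.
# This is NOT the paper's AID formula, only the motivating phenomenon.

def lp_distance(u, v, p):
    """Minkowski L_p distance between feature vectors u and v."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

ref = (0.0, 0.0)
single = (0.4, 0.0)      # one attribute changes by 0.4
concurrent = (0.2, 0.2)  # two attributes change by 0.2 each

for p in (1, 2):
    print(f"L{p}: single = {lp_distance(ref, single, p):.3f}, "
          f"concurrent = {lp_distance(ref, concurrent, p):.3f}")
```

Under L1 the two cases are equidistant from the reference, while under L2 the concurrent change scores as closer; a family such as AID, as described, parameterizes this concurrence sensitivity directly rather than leaving it as a side effect of the choice of p.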