Abstract:
In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system to lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed using a combined knowledge representation approach that employs rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment that implements the knowledge-based system approach. For system verification, results from design examples are presented.
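The combined representation named above (rules, procedural methods, and an object-oriented blackboard) can be illustrated with a minimal sketch. LADOME itself is not publicly available, so every name below (Blackboard, rule_configuration, rule_analysis) is hypothetical, and the deflection figure is a placeholder rather than a real nonlinear analysis:

```python
# Minimal sketch of a rule-driven, object-oriented blackboard in the
# spirit of the combined representation described in the abstract.
# All names and numbers are hypothetical; this is not LADOME.

class Blackboard:
    """Shared design state that all knowledge sources read and update."""
    def __init__(self):
        self.data = {}

    def post(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

def rule_configuration(bb):
    """Configuration processing: derive a geometry once a span is known."""
    if bb.get("span") is not None and bb.get("geometry") is None:
        bb.post("geometry", f"ribbed dome, span={bb.get('span')} m")
        return True
    return False

def rule_analysis(bb):
    """Analysis fires once a geometry exists (placeholder deflection model)."""
    if bb.get("geometry") is not None and bb.get("max_deflection") is None:
        bb.post("max_deflection", 0.01 * bb.get("span"))
        return True
    return False

def control_loop(bb, rules):
    """Forward-chaining controller: keep firing rules until none applies."""
    while any(rule(bb) for rule in rules):
        pass

bb = Blackboard()
bb.post("span", 40.0)
control_loop(bb, [rule_configuration, rule_analysis])
print(bb.data)  # geometry and max_deflection have been posted by the rules
```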
Abstract:
In the course of attempting to define the bone "secretome" using a signal-trap screening approach, we identified a gene encoding a small membrane protein novel to osteoblasts. Although previously identified in silico as ifitm5, no localization or functional studies had been undertaken on this gene. We characterized its expression patterns and localization in vitro and in vivo and assessed its role in matrix mineralization in vitro. The bone specificity and demonstrated role in mineralization led us to rename the gene bone-restricted ifitm-like protein (Bril). Bril encodes a 14.8-kDa, 134-amino acid protein with two transmembrane domains. Northern blot analysis showed bone-specific expression, with no expression in other embryonic or adult tissues. In situ hybridization and immunohistochemistry in mouse embryos showed expression localized to developing bone. Screening of cell lines showed Bril expression to be highest in osteoblasts, associated with the onset of matrix maturation/mineralization, suggesting a role in bone formation. Functional evidence of a role in mineralization was obtained by adenovirus-mediated Bril overexpression and lentivirus-mediated Bril shRNA knockdown in vitro. Elevated Bril resulted in dose-dependent increases in mineralization in UMR106 cells and rat primary osteoblasts. Conversely, knockdown of Bril in MC3T3 osteoblasts resulted in reduced mineralization. Thus, we identified Bril as a novel osteoblast protein and showed a role in mineralization, possibly identifying a new regulatory pathway in bone formation.
Abstract:
The Epiphany (TM) Sealer is a new dual-curing resin-based sealer that has been introduced as an alternative to gutta-percha and traditional root canal sealers. The canal filling is claimed to create a seal with the dentinal tubules within the root canal system, producing a 'monoblock' effect between the sealer and the dentinal tubules. Given the possibility of incorporating other adhesive systems, it is therefore important to study the bond strength of the resulting cement. Forty-eight mandibular canine roots were sectioned 8 mm below the CEJ. The dentine discs were prepared using a tapered diamond bur and irrigated with 1% NaOCl and 17% EDTA. Prior to application of the Epiphany (TM) Sealer, the Epiphany (TM) Primer, AdheSE, or One Up Bond F was applied to the root canal walls. LED and QTH (quartz tungsten halogen) units were used for photo-activation for 45 s, at power densities of 400 and 720 mW/cm(2), respectively. The specimens were tested on a universal testing machine at a cross-head speed of 1 mm/min until bond failure occurred. The force was recorded, and the debonding values were used to calculate push-out bond strength. Analysis of variance (ANOVA) and Tukey's post-hoc tests showed statistically significant differences (P < 0.05) for Epiphany (TM) Sealer/Epiphany (TM) Primer/QTH and Epiphany (TM) Sealer/AdheSE/QTH, which had the highest mean bond strength values. The efficiency of resin-based filling materials depends on the type of light-curing unit used, including its power density, and on the polymerization characteristics of the materials, which in turn depend on the primer/adhesive used.
Abstract:
This study (a) examined the multidimensionality of both group cohesion and group performance, (b) investigated the relationship between group-level task and social cohesion and group effectiveness, and (c) examined longitudinal changes in cohesion and performance and the direction of effect between them. First, the authors hypothesized that both task and social cohesion would positively predict all dimensions of group performance. Second, that a stronger relationship would be observed between task cohesion and task effectiveness and between social cohesion and system viability. Third, that all dimensions of cohesion and performance would increase over time. Finally, that cohesion would be both the antecedent and the consequence of performance, but that the performance-cohesion relationship would be stronger than the cohesion-performance relationship. Results supported the hypothesized one-to-one relationship between specific dimensions of group cohesion and group performance. Task cohesion was the sole predictor of self-rated performance at both Time 1 and Time 2, whereas social cohesion was the only predictor of system viability at Time 1 and the stronger predictor at Time 2. Social cohesion at Time 2 predicted performance on the group task. However, no longitudinal changes were found in cohesion or performance. Finally, group cohesion was found to be the antecedent, but not the consequence, of group performance.
Abstract:
A range of topical products is used in veterinary medicine. The efficacy of many of these products has been enhanced by the addition of penetration enhancers. Evolution has led not only to a highly specialized skin in animals and humans, but also to one whose anatomical structure and skin permeability differ between species. The skin provides an excellent barrier against the ingress of environmental contaminants, toxins, and microorganisms while performing a homeostatic role to permit terrestrial life. Over the past few years, major advances have been made in the field of transdermal drug delivery. An increasing number of drugs are being added to the list of therapeutic agents that can be delivered via the skin to the systemic circulation, where clinically effective concentrations are reached. The therapeutic benefits of topically applied veterinary products are achieved in spite of the inherent protective functions of the stratum corneum (SC), one of which is to exclude foreign substances from entering the body. Much of the recent success in this field is attributable to the rapidly expanding knowledge of the SC barrier structure and function. The bilayer domains of the intercellular lipid matrices within the SC form an excellent penetration barrier, which must be breached if poorly penetrating drugs are to be administered at an appropriate rate. One generalized approach to overcoming the barrier properties of the skin for drugs and biomolecules is the incorporation of suitable vehicles or other chemical compounds into a transdermal delivery system. Indeed, the incorporation of such compounds has become more prevalent and is a growing trend in transdermal drug delivery. Substances that help promote drug diffusion through the SC and epidermis are referred to as penetration enhancers, accelerants, adjuvants, or sorption promoters. It is interesting to note that many pour-on and spot-on formulations used in veterinary medicine contain inert ingredients (e.g., alcohols, amides, ethers, glycols, and hydrocarbon oils) that act as penetration enhancers. These substances have the potential to reduce the capacity for drug binding and to interact with some components of the skin, thereby improving drug transport. However, their inclusion in veterinary products with a high absorbed dose may result in adverse dermatological reactions (e.g., toxicological irritations) and concerns about tissue residues. These are important considerations when formulating a veterinary transdermal product to which such compounds are added, either intentionally or otherwise, for their penetration-enhancing ability. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Studies of alcoholism etiology often focus on genetic or psychosocial approaches, but not both. Greater understanding of the etiology of alcohol, tobacco, and other addictions will come from the integration of these research traditions. A research approach is outlined to test three models of the etiology of addictions (behavioral undercontrol, pharmacologic vulnerability, and negative affect regulation), addressing key questions including (i) mediators of genetic effects, (ii) genotype-environment correlation effects, (iii) genotype x environment interaction effects, (iv) the developmental unfolding of genetic and environmental effects, (v) subtyping, including the identification of distinct trajectories of substance involvement, (vi) identification of individual genes that contribute to risk, and (vii) the consequences of excessive use. By using coordinated research designs, including prospective assessment of adolescent twins and their siblings and parents; of adult substance-dependent and control twins and their MZ and DZ cotwins, the spouses of these pairs, and their adolescent offspring; and of regular families; by selecting, for gene-mapping approaches, sibships screened for extreme concordance or discordance on quantitative indices of substance use; and by using experimental (drug challenge) as well as survey approaches, a number of key questions concerning addiction etiology can be addressed. We discuss the complementary strengths and weaknesses of different sampling strategies, as well as methods to implement such an integrated approach, illustrated for the study of alcoholism etiology. A coordinated program of twin and family studies will allow a comprehensive dissection of the interplay of genetic and environmental risk factors in the etiology of alcoholism and other addictions.
Abstract:
This special issue presents an excellent opportunity to study applied epistemology in public policy. This is an important task because the arena of public policy is the social domain in which macro conditions for ‘knowledge work’ and ‘knowledge industries’ are defined and created. We argue that knowledge-related public policy has become overly concerned with creating the politico-economic parameters for the commodification of knowledge. Our policy scope is broader than that of Fuller (1988), who emphasizes the need for a social epistemology of science policy. We extend our focus to a range of policy documents that include communications, science, education and innovation policy (collectively called knowledge-related public policy in acknowledgement of the fact that there is no defined policy silo called ‘knowledge policy’), all of which are central to policy concerned with the ‘knowledge economy’ (Rooney and Mandeville, 1998). However, what we will show here is that, as Fuller (1995) argues, ‘knowledge societies’ are not industrial societies permeated by knowledge, but that knowledge societies are permeated by industrial values. Our analysis is informed by an autopoietic perspective. Methodologically, we approach it from a sociolinguistic position that acknowledges the centrality of language to human societies (Graham, 2000). Here, what we call ‘knowledge’ is posited as a social and cognitive relationship between persons operating on and within multiple social and non-social (or, crudely, ‘physical’) environments. Moreover, knowing, we argue, is a sociolinguistically constituted process. Further, we emphasize that the evaluative dimension of language is most salient for analysing contemporary policy discourses about the commercialization of epistemology (Graham, in press). Finally, we provide a discourse analysis of a sample of exemplary texts drawn from a 1.3 million-word corpus of knowledge-related public policy documents that we compiled from local, state, national and supranational legislatures throughout the industrialized world. Our analysis exemplifies a propensity in policy for resorting to technocratic, instrumentalist and anti-intellectual views of knowledge in policy. We argue that what underpins these patterns is a commodity-based conceptualization of knowledge, which is underpinned by an axiology of narrowly economic imperatives at odds with the very nature of knowledge. The commodity view of knowledge, therefore, is flawed in its ignorance of the social systemic properties of ‘knowing’.
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into additional mathematical relationships or constraints between model variables, equations, balance volumes, or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described either as an elementary assumption, i.e. an assumption consisting of only an assumption atom, or as a composite assumption consisting of a conjunction of elementary assumptions. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined, and necessary conditions for checking them are given. These transformations can be used in several ways, and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy, and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within each equivalence class there are many model members, related by algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
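The central device here, an assumption atom as a triplet acting as a transformation on the model equations, can be sketched as follows. This is a minimal illustration that assumes a (subject, relation, value) triplet and simple word-level text substitution; the authors' actual syntax and transformation semantics are not reproduced:

```python
# Sketch: assumption atoms as (subject, relation, value) triplets that
# transform a set of model equations. A real tool would operate on
# symbolic equations; plain word-boundary substitution is used here
# purely for illustration.

import re
from dataclasses import dataclass

@dataclass(frozen=True)
class AssumptionAtom:
    subject: str   # a model variable, parameter, or balance-volume quantity
    relation: str  # e.g. "is-negligible", "is-constant"
    value: str     # the substituted value, e.g. "0" or "M0"

def apply_atom(equations, atom):
    """Apply one elementary assumption to every equation."""
    if atom.relation == "is-negligible":
        target = "0"
    elif atom.relation == "is-constant":
        target = atom.value
    else:
        raise ValueError(f"unknown relation: {atom.relation}")
    pattern = rf"\b{re.escape(atom.subject)}\b"
    return [re.sub(pattern, target, eq) for eq in equations]

def apply_composite(equations, atoms):
    """A composite assumption is a conjunction of elementary assumptions."""
    for atom in atoms:
        equations = apply_atom(equations, atom)
    return equations

# Hypothetical two-equation balance model.
model = ["dM/dt = F_in - F_out",
         "dT/dt = (Q + F_in*(T_in - T))/M"]
assumptions = [AssumptionAtom("Q", "is-negligible", "0"),  # adiabatic
               AssumptionAtom("M", "is-constant", "M0")]   # constant holdup
print(apply_composite(model, assumptions))
```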
Abstract:
In this study, we compared the effector functions and fate of a number of human CTL clones in vitro or ex vivo following contact with variant peptides presented either on the cell surface or in a soluble multimeric format. In the presence of CD8 coreceptor binding, there is a good correlation between TCR signaling, killing of the targets, and FasL-mediated CTL apoptosis. Blocking CD8 binding using alpha3-domain mutants of MHC class I results in much-reduced signaling and reduced killing of the targets. Surprisingly, however, FasL expression is induced to a similar degree on these CTLs, and apoptosis of the CTLs is unaffected. The ability to divorce these events may allow the deletion of antigen-specific and pathological CTL populations without the deleterious effects induced by full CTL activation.
Abstract:
This paper explores an approach to the implementation and evaluation of integrated health service delivery. It identifies the key issues involved in integration evaluation, provides a framework for assessment and identifies areas for the development of new tools and measures. A proactive role for evaluators in responding to health service reform is advocated.
Abstract:
The stereocontrolled synthesis of (2S,4R,6R,8S,10S,1'R,1''R)-2-(acetylhydroxymethyl)-4,10-dimethyl-8-(isopropenylhydroxymethyl)-1,7-dioxaspiro[5.5]undecane (4a) and its C-1''-epimer (4b), the key parent spiroketals of the HIV-1 protease-inhibitory didemnaketals from the ascidian Didemnum sp., has been carried out in multiple steps from natural (R)-(+)-pulegone, involving the diastereoselective construction of four chiral carbon centers (C-2, C-6, C-8, and C-1') by intramolecular chiral induction.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model. In this paper, a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations and is hence useful in commercial model development.
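The observer design is not given in the abstract, but the last of the three steps, assigning a residual to the error class whose feature-matrix subspace best contains it, can be sketched numerically. The feature matrices and the residual below are fabricated for illustration; in the method described, they would come from the error classification and the observer, respectively:

```python
# Sketch of the isolation step: each error class has a feature matrix
# whose columns span the subspace it imposes on the residuals, and an
# observed residual is assigned to the class whose subspace it lies
# closest to. All numbers here are made up.

import numpy as np

def subspace_distance(residual, feature_matrix):
    """Norm of the residual component lying outside span(feature_matrix)."""
    coeffs, *_ = np.linalg.lstsq(feature_matrix, residual, rcond=None)
    return np.linalg.norm(residual - feature_matrix @ coeffs)

def isolate_error(residual, classes):
    """Return the error class whose subspace best explains the residual."""
    return min(classes, key=lambda name: subspace_distance(residual, classes[name]))

# Hypothetical feature matrices for two classes of coding errors.
classes = {
    "wrong stoichiometric coefficient": np.array([[1.0], [0.5], [0.0]]),
    "wrong kinetic rate expression":    np.array([[0.0], [1.0], [1.0]]),
}

residual = np.array([2.0, 1.0, 0.05])    # residual generated by the observer
print(isolate_error(residual, classes))  # -> "wrong stoichiometric coefficient"
```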