53 results for Software Complexity
Abstract:
This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the nature of ambiguities that could arise in their business contexts. End users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries in typical domains. (C) 2001 Elsevier Science B.V. All rights reserved.
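The translation step this abstract recommends — making a natural-language request precise before the query is developed — can be sketched as follows. The schema, data, and the threshold chosen to resolve "large" are hypothetical illustrations, not from the study:

```python
import sqlite3

# Hypothetical schema and data for illustration only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "Acme", 1500.0), (2, "Beta", 300.0), (3, "Acme", 2200.0)])

# Ambiguous request: "list customers with large orders".
# Made precise before development: "large" := amount > 1000,
# "customers" := distinct customer names.
precise_query = """
    SELECT DISTINCT customer
    FROM orders
    WHERE amount > 1000
    ORDER BY customer
"""
rows = [r[0] for r in con.execute(precise_query)]
print(rows)
```

Surfacing the interpretation of "large" as an explicit predicate is exactly the kind of ambiguity check the authors suggest performing before query development.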
Abstract:
Computer assisted learning has an important role in the teaching of pharmacokinetics to health sciences students because it transfers the emphasis from the purely mathematical domain to an 'experiential' domain in which graphical and symbolic representations of actions and their consequences form the major focus for learning. Basic pharmacokinetic concepts can be taught by experimenting with how dose and dosage interval interact with drug absorption (e.g. absorption rate, bioavailability), drug distribution (e.g. volume of distribution, protein binding) and drug elimination (e.g. clearance) to determine drug concentrations, using library ('canned') pharmacokinetic models. Such 'what if' approaches are found in calculator-simulators such as PharmaCalc, Practical Pharmacokinetics and PK Solutions. Others such as SAAM II, ModelMaker, and Stella represent the 'systems dynamics' genre, which requires the user to conceptualise a problem and formulate the model on-screen using symbols, icons, and directional arrows. The choice of software should be determined by the aims of the subject/course, the experience and background of the students in pharmacokinetics, and institutional factors including price and networking capabilities of the package(s). Enhanced learning may result if the computer teaching of pharmacokinetics is supported by tutorials, especially where the techniques are applied to solving problems in which the link with healthcare practices is clearly established.
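The 'what if' style of simulation described above can be sketched with a standard one-compartment IV bolus model under repeated dosing. The parameter values below are illustrative only and are not drawn from any of the packages named:

```python
import math

def concentration(t, dose, vd, half_life, tau):
    """Superpose single-dose decays C = (dose/Vd) * exp(-k * (t - n*tau))
    over all doses given at times n*tau <= t (one-compartment IV bolus,
    first-order elimination)."""
    k = math.log(2) / half_life          # elimination rate constant
    n_doses = int(t // tau) + 1          # doses administered so far
    return sum((dose / vd) * math.exp(-k * (t - n * tau))
               for n in range(n_doses))

# "What if" the dosage interval is shortened from 12 h to 6 h?
# Compare concentrations just before the next scheduled dose (trough).
trough_12h = concentration(47.9, dose=500, vd=40, half_life=6, tau=12)
trough_6h = concentration(47.9, dose=500, vd=40, half_life=6, tau=6)
print(trough_6h > trough_12h)  # more frequent dosing raises trough levels
```

Varying one parameter at a time this way mirrors the interactive exploration the calculator-simulator packages provide.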
Abstract:
Around 98% of all transcriptional output in humans is noncoding RNA. RNA-mediated gene regulation is widespread in higher eukaryotes and complex genetic phenomena like RNA interference, co-suppression, transgene silencing, imprinting, methylation, and possibly position-effect variegation and transvection, all involve intersecting pathways based on or connected to RNA signaling. I suggest that the central dogma is incomplete, and that intronic and other non-coding RNAs have evolved to comprise a second tier of gene expression in eukaryotes, which enables the integration and networking of complex suites of gene activity. Although proteins are the fundamental effectors of cellular function, the basis of eukaryotic complexity and phenotypic variation may lie primarily in a control architecture composed of a highly parallel system of trans-acting RNAs that relay state information required for the coordination and modulation of gene expression, via chromatin remodeling, RNA-DNA, RNA-RNA and RNA-protein interactions. This system has interesting and perhaps informative analogies with small world networks and dataflow computing.
Abstract:
C. L. Isaac and A. R. Mayes (1999a, 1999b) compared forgetting rates in amnesic patients and normal participants across a range of memory tasks. Although the results are complex, many of them appear to be replicable and there are several commendable features to the design and analysis. Nevertheless, the authors largely ignored 2 relevant literatures: the traditional literature on proactive inhibition/interference and the formal analyses of the complexity of the bindings (associations) required for memory tasks. It is shown how the empirical results and conceptual analyses in these literatures are needed to guide the choice of task, the design of experiments, and the interpretation of results for amnesic patients and normal participants.
Abstract:
A data warehouse is a data repository which collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views for materialization. The objective is to select an appropriate set of views that minimizes the total query response time with the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem is totally different from the view selection problem under the disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated. Two efficient, heuristic algorithms for the problem are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to some well-solved optimization problems. As a result, an approximate solution of the known optimization problem will give a feasible solution of the original problem. (C) 2001 Elsevier Science B.V. All rights reserved.
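One common greedy flavor of such heuristics (a generic sketch, not the paper's specific algorithms) ranks candidate views by query benefit per unit of maintenance time and picks them until the maintenance bound is exhausted; the numbers below are invented for illustration:

```python
def select_views(views, maintenance_bound):
    """Greedy heuristic for view selection under a maintenance-time bound.
    Each view is (name, query_benefit, maintenance_time); pick views in
    decreasing benefit/maintenance ratio while total maintenance time
    stays within the bound (a knapsack-style relaxation)."""
    chosen, used = [], 0.0
    for name, benefit, mtime in sorted(views,
                                       key=lambda v: v[1] / v[2],
                                       reverse=True):
        if used + mtime <= maintenance_bound:
            chosen.append(name)
            used += mtime
    return chosen, used

# Hypothetical candidates: (name, response-time saving, maintenance cost)
candidates = [("v_sales", 90, 30), ("v_region", 60, 10), ("v_daily", 40, 25)]
picked, used = select_views(candidates, maintenance_bound=45)
print(picked, used)
```

The reduction the authors describe — mapping the constrained selection problem onto a well-solved optimization problem — is what makes such ratio-based greedy rules give provable approximation behavior in the knapsack setting.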
Abstract:
In the past century, the debate over whether or not density-dependent factors regulate populations has generally focused on changes in mean population density, ignoring the spatial variance around the mean as unimportant noise. In an attempt to provide a different framework for understanding population dynamics based on individual fitness, this paper discusses the crucial role of spatial variability itself on the stability of insect populations. The advantages of this method are the following: (1) it is founded on evolutionary principles rather than post hoc assumptions; (2) it erects hypotheses that can be tested; and (3) it links disparate ecological schools, including spatial dynamics, behavioral ecology, preference-performance, and plant apparency into an overall framework. At the core of this framework, habitat complexity governs insect spatial variance, which in turn determines population stability. First, the minimum risk distribution (MRD) is defined as the spatial distribution of individuals that results in the minimum number of premature deaths in a population given the distribution of mortality risk in the habitat (and, therefore, leading to maximized population growth). The greater the divergence of actual spatial patterns of individuals from the MRD, the greater the reduction of population growth and size from high, unstable levels. Then, based on extensive data from 29 populations of the processionary caterpillar, Ochrogaster lunifer, four steps are used to test the effect of habitat interference on population growth rates. (1) The costs (increasing the risk of scramble competition) and benefits (decreasing the risk of inverse density-dependent predation) of egg and larval aggregation are quantified. (2) These costs and benefits, along with the distribution of resources, are used to construct the MRD for each habitat. (3) The MRD is used as a benchmark against which the actual spatial pattern of individuals is compared.
The degree of divergence of the actual spatial pattern from the MRD is quantified for each of the 29 habitats. (4) Finally, indices of habitat complexity are used to provide highly accurate predictions of spatial divergence from the MRD, showing that habitat interference reduces population growth rates from high, unstable levels. The reason for the divergence appears to be that high levels of background vegetation (vegetation other than host plants) interfere with female host-searching behavior. This leads to a spatial distribution of egg batches with high mortality risk, and therefore lower population growth. Knowledge of the MRD in other species should be a highly effective means of predicting trends in population dynamics. Species with high divergence between their actual spatial distribution and their MRD may display relatively stable dynamics at low population levels. In contrast, species with low divergence should experience high levels of intragenerational population growth leading to frequent habitat-wide outbreaks and unstable dynamics in the long term. Six hypotheses, erected under the framework of spatial interference, are discussed, and future tests are suggested.
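Steps (2)-(3) above — constructing an MRD from site-level mortality risks and quantifying divergence of the observed distribution from it — can be sketched as follows. The risk values, site capacities, and counts are invented for illustration; the paper's actual construction also weighs the aggregation costs and benefits quantified in step (1):

```python
def minimum_risk_distribution(risks, capacities, n):
    """Simplified MRD: fill the lowest-risk sites first, up to each site's
    capacity, ignoring density-dependent aggregation effects."""
    alloc = [0] * len(risks)
    for i in sorted(range(len(risks)), key=lambda i: risks[i]):
        take = min(capacities[i], n)
        alloc[i], n = take, n - take
    return alloc

def divergence(actual, target):
    """Total-variation-style distance between observed counts and the MRD
    (0 = identical distributions, 1 = completely disjoint)."""
    total = sum(actual)
    return 0.5 * sum(abs(a, ) if False else abs(a - m)
                     for a, m in zip(actual, target)) / total

risks = [0.1, 0.4, 0.8]        # hypothetical per-site mortality risks
capacities = [10, 10, 10]      # hypothetical site capacities
observed = [4, 3, 3]           # observed egg-batch counts per site
target = minimum_risk_distribution(risks, capacities, 10)
print(target, divergence(observed, target))
```

A population whose observed counts track the low-risk sites closely would score near zero here; the paper's result is that habitat complexity predicts how large this divergence becomes.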
Abstract:
The development of cropping systems simulation capabilities world-wide combined with easy access to powerful computing has resulted in a plethora of agricultural models and consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmer, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g.
decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we will give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Let g be the genus of the Hermitian function field H/F_{q^2} and let C_L(D, mQ_∞) be a typical Hermitian code of length n. In [Des. Codes Cryptogr., to appear], we determined the dimension/length profile (DLP) lower bound on the state complexity of C_L(D, mQ_∞). Here we determine when this lower bound is tight and when it is not. For m ≤ (n−2)/2 or m ≥ (n−2)/2 + 2g, the DLP lower bounds reach Wolf's upper bound on state complexity and thus are trivially tight. We begin by showing that for about half of the remaining values of m the DLP bounds cannot be tight. In these cases, we give a lower bound on the absolute state complexity of C_L(D, mQ_∞), which improves the DLP lower bound. Next we give a good coordinate order for C_L(D, mQ_∞). With this good order, the state complexity of C_L(D, mQ_∞) achieves its DLP bound (whenever this is possible). This coordinate order also provides an upper bound on the absolute state complexity of C_L(D, mQ_∞) (for those values of m for which the DLP bounds cannot be tight). Our bounds on absolute state complexity do not meet for some of these values of m, and this leaves open the question whether our coordinate order is best possible in these cases. A straightforward application of these results is that if C_L(D, mQ_∞) is self-dual, then its state complexity (with respect to the lexicographic coordinate order) achieves its DLP bound of n/2 − q^2/4, and, in particular, so does its absolute state complexity.
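For concreteness, the ranges in which the DLP bound is trivially tight can be evaluated from the standard Hermitian parameters n = q^3 and g = q(q−1)/2. The snippet below just evaluates these formulas for a small q; it is not taken from the paper:

```python
def hermitian_params(q):
    """Length and genus of the Hermitian function field over F_{q^2}:
    n = q^3 rational places off the place at infinity, genus q(q-1)/2."""
    n = q ** 3
    g = q * (q - 1) // 2
    return n, g

q = 2
n, g = hermitian_params(q)
lo = (n - 2) / 2            # DLP bound trivially tight for m <= lo ...
hi = (n - 2) / 2 + 2 * g    # ... or m >= hi (Wolf bound is reached)
selfdual_dlp = n / 2 - q ** 2 / 4  # DLP bound for a self-dual Hermitian code
print(n, g, lo, hi, selfdual_dlp)
```

So for q = 2 the only interesting values of m lie strictly between (n−2)/2 and (n−2)/2 + 2g, the window the paper analyzes.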
Abstract:
Many business-oriented software applications are subject to frequent changes in requirements. This paper shows that, ceteris paribus, increases in the volatility of system requirements decrease the reliability of software. Further, systems that exhibit high volatility during the development phase are likely to have lower reliability during their operational phase. In addition to the typically higher volatility of requirements, end-users who specify the requirements of business-oriented systems are usually less technically oriented than people who specify the requirements of compilers, radar tracking systems or medical equipment. Hence, the characteristics of software reliability problems for business-oriented systems are likely to differ significantly from those of more technically oriented systems.
Abstract:
We reinterpret the state space dimension equations for geometric Goppa codes. An easy consequence is that if deg G ≤ (n−2)/2 or deg G ≥ (n−2)/2 + 2g then the state complexity of C_L(D, G) is equal to the Wolf bound. For deg G ∈ [(n−1)/2, (n−3)/2 + 2g], we use Clifford's theorem to give a simple lower bound on the state complexity of C_L(D, G). We then derive two further lower bounds on the state space dimensions of C_L(D, G) in terms of the gonality sequence of F/F_q. (The gonality sequence is known for many of the function fields of interest for defining geometric Goppa codes.) One of the gonality bounds uses previous results on the generalised weight hierarchy of C_L(D, G) and one follows in a straightforward way from first principles; often they are equal. For Hermitian codes both gonality bounds are equal to the DLP lower bound on state space dimensions. We conclude by using these results to calculate the DLP lower bound on state complexity for Hermitian codes.
Abstract:
This paper characterizes when a Delone set X in R^n is an ideal crystal in terms of restrictions on the number of its local patches of a given size or on the heterogeneity of their distribution. For a Delone set X, let N_X(T) count the number of translation-inequivalent patches of radius T in X and let M_X(T) be the minimum radius such that every closed ball of radius M_X(T) contains the center of a patch of every one of these kinds. We show that for each of these functions there is a gap in the spectrum of possible growth rates between being bounded and having linear growth, and that having sufficiently slow linear growth is equivalent to X being an ideal crystal. Explicitly, for N_X(T), if R is the covering radius of X then either N_X(T) is bounded or N_X(T) ≥ T/2R for all T > 0. The constant 1/2R in this bound is best possible in all dimensions. For M_X(T), either M_X(T) is bounded or M_X(T) ≥ T/3 for all T > 0. Examples show that the constant 1/3 in this bound cannot be replaced by any number exceeding 1/2. We also show that every aperiodic Delone set X has M_X(T) ≥ c_n T for all T > 0, for a certain constant c_n which depends on the dimension n of X and is > 1/3 when n > 1.
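The dichotomy for N_X(T) can be illustrated in one dimension: for the periodic set Z the patch count is bounded (in fact constant), while for an aperiodic set such as the Fibonacci chain it keeps growing. This is only a finite numerical sketch on integer gap sequences, not a proof about general Delone sets:

```python
def fibonacci_points(n_iters):
    """Point set whose gaps follow the Fibonacci substitution a -> ab,
    b -> a (gap 2 for 'a', gap 1 for 'b'): a standard aperiodic 1-D set."""
    word = "a"
    for _ in range(n_iters):
        word = "".join("ab" if c == "a" else "a" for c in word)
    points, x = [0], 0
    for c in word:
        x += 2 if c == "a" else 1
        points.append(x)
    return points

def patch_count(points, radius):
    """N_X(T): number of translation-inequivalent patches of the given
    radius, counted at interior points to avoid boundary effects."""
    lo, hi = points[0] + radius, points[-1] - radius
    patches = set()
    for p in points:
        if lo <= p <= hi:
            patches.add(tuple(q - p for q in points if abs(q - p) <= radius))
    return len(patches)

Z = list(range(200))           # periodic: a 1-D ideal crystal
fib = fibonacci_points(10)     # aperiodic
print(patch_count(Z, 5), patch_count(fib, 5))
```

Every interior point of Z sees the same patch, while the Fibonacci chain exhibits several patch types already at small radii, in line with the bounded-versus-linear-growth gap the paper proves.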
Abstract:
Teaching the PSP: Challenges and Lessons Learned by Jurgen Borstler, David Carrington, Gregory W. Hislop, Susan Lisack, Keith Olson, and Laurie Williams, pp. 42-48. Software engineering educators need to provide environments where students learn about the size and complexity of modern software systems and the techniques available for managing these difficulties. Five universities used the Personal Software Process to teach software engineering concepts in a variety of contexts.