84 results for Models Of Data
Abstract:
To develop real-time simulations of wind instruments, digital waveguide filters can be used as an efficient representation of the air column. Many aerophones are shaped as horns, which can be approximated using conical sections; the derivation of conical waveguide filters is therefore of special interest. When these filters are used in combination with a generalized reed excitation, several classes of wind instruments can be simulated. In this paper we present methods for transforming a continuous description of conical tube segments into a discrete filter representation. The coupling of the reed model with the conical waveguide and a simplified model of the termination at the open end are described in the same way. It turns out that the complete lossless conical waveguide requires only one type of filter. Furthermore, we developed a digital reed excitation model that is based purely on numerical integration methods, i.e., without the use of a look-up table.
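The abstract does not reproduce the conical filter derivations, so the following is only a loose Python illustration of the digital waveguide idea the paper builds on: two delay lines carrying the travelling pressure waves, a one-pole lowpass reflection standing in for the open end, and a clipped pressure-dependent reflection coefficient standing in for the reed. Every constant here is an assumption invented for the sketch, not taken from the paper.

```python
import numpy as np

# Minimal digital waveguide sketch (cylindrical bore, NOT the paper's
# conical filters).  right/left hold the right- and left-going pressure
# waves; the open end reflects through a one-pole lowpass with sign
# inversion; the reed is a memoryless clipped-linear reflection
# coefficient.  All parameter values are illustrative assumptions.

fs = 44100            # sample rate (Hz)
bore_len = 64         # delay-line length in samples (sets the pitch)
p_mouth = 0.4         # normalized blowing pressure

right = np.zeros(bore_len)   # right-going wave (mouth -> open end)
left = np.zeros(bore_len)    # left-going wave (open end -> mouth)
lp_state = 0.0               # one-pole lowpass state at the open end
out = np.zeros(fs)           # one second of output

for n in range(fs):
    # open-end reflection: lowpass the outgoing wave, invert its sign
    lp_state = 0.6 * lp_state + 0.4 * right[-1]
    refl_open = -lp_state

    # reed junction: reflection coefficient depends on the pressure
    # difference across the reed (a stand-in for a reed table)
    dp = p_mouth - left[0]
    refl_reed = float(np.clip(0.7 - 0.3 * dp, -1.0, 1.0))
    p_plus = p_mouth - refl_reed * dp

    # advance both delay lines by one sample
    right = np.roll(right, 1); right[0] = p_plus
    left = np.roll(left, -1);  left[-1] = refl_open

    out[n] = right[-1] + left[0]   # pressure near the open end
```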
Abstract:
The use of bit-level systolic array circuits as building blocks in the construction of larger word-level systolic systems is investigated. It is shown that the overall structure and detailed timing of such systems may be derived quite simply using the dependence graph and cut-set procedure developed by S. Y. Kung (1988). This provides an attractive and intuitive approach to the bit-level design of many VLSI signal processing components. The technique can be applied to ripple-through and partly pipelined circuits as well as fully systolic designs. It therefore provides a means of examining the tradeoffs among levels of pipelining, chip area, power consumption, and throughput rate within a given VLSI design.
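As a word-level companion to the designs discussed (the cut-set retiming procedure itself is not shown), here is a cycle-accurate toy simulation of a weight-stationary systolic FIR array: one multiply-accumulate cell per tap, input samples moving through two registers per cell and partial sums through one, which keeps the two streams correctly aligned. This two-registers-per-cell arrangement is a standard systolic FIR design assumed for illustration, not a reconstruction from the paper.

```python
# Toy cycle-accurate simulation of a weight-stationary linear systolic
# array computing y[n] = sum_k w[k] * x[n-k].  Each cell holds one
# fixed weight; x travels right through two registers per cell, the
# partial sum y through one, so the streams stay correctly aligned.

def systolic_fir(w, x, n_out):
    k = len(w)
    xa = [0.0] * k     # first x register in each cell
    xb = [0.0] * k     # second x register in each cell
    y = [0.0] * k      # partial-sum register in each cell
    out = []
    for t in range(n_out + k - 1):
        xin = x[t] if t < len(x) else 0.0
        # values latched on this clock tick, computed from the old state
        new_xa = [xin] + xb[:-1]      # x advances one cell per tick
        new_xb = xa[:]                # second x pipeline stage
        new_y = [w[0] * new_xa[0]] + [
            y[i - 1] + w[i] * new_xa[i] for i in range(1, k)
        ]
        xa, xb, y = new_xa, new_xb, new_y
        out.append(y[-1])
    return out[k - 1:]    # drop the pipeline fill-up latency

# direct convolution of [1, 2, 3] with [1, 0, 0, 1] is [1, 2, 3, 1, 2, 3]
print(systolic_fir([1, 2, 3], [1, 0, 0, 1], n_out=6))
```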
Abstract:
The two critical forms of dementia are Alzheimer's disease (AD) and vascular dementia (VD). The alterations of Ca2+/calmodulin/CaMKII/CaV1.2 signaling in AD and VD have not been well elucidated. Here we have demonstrated changes in the levels of CaV1.2, calmodulin, p-CaMKII, p-CREB and BDNF proteins by Western blot analysis and the co-localization of p-CaMKII/CaV1.2 by double-labeling immunofluorescence in the hippocampus of APP/PS1 mice and VD gerbils. Additionally, expression of these proteins and intracellular calcium levels were examined in cultured neurons treated with Aβ1–42. The expression of CaV1.2 protein was increased in VD gerbils and in cultured neurons but decreased in APP/PS1 mice; the expression of calmodulin protein was increased in APP/PS1 mice and VD gerbils; levels of p-CaMKII, p-CREB and BDNF proteins were decreased in AD and VD models. The number of neurons in which p-CaMKII and CaV1.2 were co-localized was decreased in the CA1 and CA3 regions in the two models. Intracellular calcium was increased in the cultured neurons treated with Aβ1–42. Collectively, our results suggest that the alterations in CaV1.2, calmodulin, p-CaMKII, p-CREB and BDNF may be involved in the impairment of memory and cognition in AD and VD models.
Abstract:
Motivated by recent models involving off-centre ignition of Type Ia supernova explosions, we undertake three-dimensional time-dependent radiation transport simulations to investigate the range of bolometric light-curve properties that could be observed from supernovae in which there is a lop-sided distribution of the products of nuclear burning. We consider both a grid of artificial toy models, which illustrate the conceivable range of effects, and a recent three-dimensional hydrodynamical explosion model. We find that observationally significant viewing-angle effects are likely to arise in such supernovae and that these may have important ramifications for the interpretation of the observed diversity of Type Ia supernovae and the systematic uncertainties that relate to their use as standard candles in contemporary cosmology. © 2007 RAS.
Abstract:
Data obtained with any research tool must be reproducible, a concept referred to as reliability. Three techniques are often used to evaluate the reliability of tools that yield continuous data in aging research: intraclass correlation coefficients (ICC), Pearson correlations, and paired t tests. These are often construed as equivalent when applied to reliability. This is not correct, and the confusion may lead researchers to select instruments based on statistics that may not reflect actual reliability. The purpose of this paper is to compare the reliability estimates produced by these three techniques and determine the preferable technique. A hypothetical dataset was produced to evaluate the reliability estimates obtained with ICC, Pearson correlations, and paired t tests in three different situations. For each situation two sets of 20 observations were created to simulate an intrarater or inter-rater paradigm, based on 20 participants with two observations per participant. The situations were designed to demonstrate good agreement, systematic bias, or substantial random measurement error. In the situation demonstrating good agreement, all three techniques supported the conclusion that the data were reliable. In the situation demonstrating systematic bias, the ICC and the t test suggested the data were not reliable, whereas the Pearson correlation suggested high reliability despite the systematic discrepancy. In the situation representing substantial random measurement error, where low reliability was expected, the ICC and the Pearson coefficient accurately reflected this; the t test suggested the data were reliable. The ICC is the preferred technique for measuring reliability. Although there are some limitations associated with the use of this technique, they can be overcome.
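The three-way comparison is easy to reproduce numerically. The sketch below uses hypothetical data (not the paper's dataset) to recreate the systematic-bias scenario: a constant offset between two sets of ratings leaves the Pearson correlation near 1, is flagged as a significant difference by the paired t test, and depresses a one-way ICC, which counts rater effects as error. The abstract does not state which ICC form the paper used; ICC(1,1) is assumed here for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def icc_oneway(a, b):
    # One-way random-effects ICC(1,1) for two measurements per subject;
    # rater effects are pooled into error, so systematic bias lowers it.
    x = np.column_stack([a, b])
    n, k = x.shape
    subj_means = x.mean(axis=1)
    msb = k * np.sum((subj_means - x.mean()) ** 2) / (n - 1)
    msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

true_score = rng.normal(50, 10, 20)              # 20 hypothetical participants
rater1 = true_score + rng.normal(0, 2, 20)       # observation 1
rater2 = true_score + 15 + rng.normal(0, 2, 20)  # observation 2: +15 bias

r, _ = stats.pearsonr(rater1, rater2)
t, p = stats.ttest_rel(rater1, rater2)
print(f"Pearson r = {r:.3f}")                    # high: blind to the offset
print(f"paired t p = {p:.2e}")                   # significant: detects the bias
print(f"ICC(1,1)  = {icc_oneway(rater1, rater2):.3f}")  # low: penalizes the bias
```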
Abstract:
In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using factor analysis or an index number. We argue that such compression loses information if the data actually have high dimensionality. We advocate the use of a non-parametric estimator, commonly used in physics (the Takens estimator), to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data-compression approaches is that the data do not have to be linearized. Applying our ideas to the United Nations Human Development Index, we find that the four variables used in its construction have dimension three and that the index therefore loses information.
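The Takens estimator is compact enough to sketch directly. The following minimal Python version (with illustrative synthetic data, not the Human Development Index variables) uses the maximum-likelihood form of the estimator: over all pairwise distances r smaller than a cutoff r0, the correlation dimension is estimated as D = -1 / mean(ln(r / r0)).

```python
import numpy as np
from scipy.spatial.distance import pdist

def takens_dimension(data, r0):
    # Takens' maximum-likelihood estimator of the correlation dimension:
    # D = -1 / mean(log(r / r0)) over all pairwise distances r < r0.
    d = pdist(data)                  # all pairwise Euclidean distances
    d = d[(d > 0) & (d < r0)]        # keep small, nonzero distances
    return -1.0 / np.mean(np.log(d / r0))

rng = np.random.default_rng(1)
# four observed variables whose underlying data fill only a 3-D set:
# the fourth coordinate is a deterministic function of the other three
cube = rng.uniform(size=(2000, 3))
obs = np.column_stack([cube, cube.sum(axis=1)])
print(takens_dimension(obs, r0=0.3))   # close to 3, not 4
```

This mirrors the paper's point: four observed variables can have intrinsic dimension three, in which case compressing them to a single index discards information.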
Abstract:
High-quality data from appropriate archives are needed for the continuing improvement of radiocarbon calibration curves. We discuss here the basic assumptions behind ¹⁴C dating that necessitate calibration and the relative strengths and weaknesses of the archives from which calibration data are obtained. We also highlight the procedures, problems and uncertainties involved in determining atmospheric and surface-ocean ¹⁴C/¹²C in these archives, including a discussion of the various methods used to derive an independent absolute timescale and its uncertainty. The types of data required for the current IntCal database and calibration curve model are tabulated with examples.
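For orientation (this is the standard Stuiver-Polach convention, not something specific to this abstract), the conventional radiocarbon age that a calibration curve then maps onto a calendar timescale is computed from the measured isotope ratio as:

```latex
% Conventional radiocarbon age: F is the sample's 14C/12C ratio
% normalized by the standard's; 8033 yr is the Libby mean life,
% 5568 / ln 2.  Calibration maps t onto calendar years via IntCal.
t = -8033 \,\ln F, \qquad
F = \frac{\left({}^{14}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
         {\left({}^{14}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
```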
Abstract:
A good understanding of the different theoretical models is essential when working in the field of mental health. Not only does it help with understanding experiences of mental health difficulties and to find meaning, but it also provides a framework for expanding our knowledge of the field.
As part of the Foundations of Mental Health Practice series, this book provides a critical overview of the theoretical perspectives relevant to mental health practice. At the core of this book is the idea that no single theory is comprehensive on its own and that each theory has its limitations. The book is divided into two parts: Part I explores traditional models of mental health and covers the key areas of bio-medical perspectives, psychological perspectives and social perspectives, whilst Part II looks at contemporary ideas that challenge and push these traditional views. The contributions, strengths and limitations of each model are explored and, as a result, the book encourages a more holistic, open approach to understanding and responding to mental health issues.
Together, these different approaches offer students and practitioners a powerful set of perspectives from which to approach their study and careers. Each model is covered in a clear and structured way with supporting exercises and case studies. It is an essential text for anyone studying or practising in the field of mental health, including social workers, nurses and psychologists.
Abstract:
The operant learning theory account of behaviors of clinical significance in people with intellectual disability (ID) has dominated the field for nearly 50 years. However, in the last two decades, there has been a substantial increase in published research that describes the behavioral phenotypes of genetic disorders and shows that behaviors such as self-injury and aggression are more common in some syndromes than might be expected given group characteristics. These cross-syndrome differences in prevalence warrant explanation, not least because this observation challenges an exclusively operant learning theory account. To explore this possible conflict between theoretical account and empirical observation, we describe the genetic cause and physical, social, cognitive and behavioral phenotypes of four disorders associated with ID (Angelman, Cornelia de Lange, Prader-Willi and Smith-Magenis syndromes) and focus on the behaviors of clinical significance in each syndrome. For each syndrome we then describe a model of the interactions between physical characteristics, cognitive and motivational endophenotypes and environmental factors (including operant reinforcement) to account for the resultant behavioral phenotype. In each syndrome it is possible to identify pathways from gene to physical phenotype to cognitive or motivational endophenotype to behavior to environment and back to behavior. We identify the implications of these models for responsive and early intervention and the challenges for research in this area. We identify a pressing need for meaningful dialog between different disciplines to construct better informed models that can incorporate all relevant and robust empirical evidence.
Abstract:
Nasal congestion is one of the most troublesome symptoms of many upper airway diseases. We characterized the effect of selective α2c-adrenergic agonists in animal models of nasal congestion. In porcine mucosa tissue, compound A and compound B contracted nasal veins with only modest effects on arteries. In in vivo experiments, we examined the nasal decongestant dose-response characteristics, pharmacokinetic/pharmacodynamic relationship, duration of action, potential development of tolerance, and topical efficacy of α2c-adrenergic agonists. Acoustic rhinometry was used to determine nasal cavity dimensions following intranasal compound 48/80 (1%, 75 µl). In feline experiments, compound 48/80 decreased nasal cavity volume and minimum cross-sectional areas by 77% and 40%, respectively. Oral administration of compound A (0.1-3.0 mg/kg), compound B (0.3-5.0 mg/kg), and d-pseudoephedrine (0.3 and 1.0 mg/kg) produced dose-dependent decongestion. Unlike d-pseudoephedrine, compounds A and B did not alter systolic blood pressure. The plasma exposure of compound A producing robust decongestion (EC80) was 500 nM, which corresponded well to the duration of action of approximately 4.0 hours. No tolerance to the decongestant effect of compound A (1.0 mg/kg p.o.) was observed. To study the topical efficacies of compounds A and B, the drugs were given topically 30 minutes after compound 48/80 (a therapeutic paradigm); both agents reversed nasal congestion. Finally, nasal-decongestive activity was confirmed in the dog. We demonstrate that α2c-adrenergic agonists act as nasal decongestants without cardiovascular actions in animal models of upper airway congestion.
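As a purely hypothetical illustration of how an EC80-style potency figure relates to a fitted concentration-response curve (all numbers invented, not the paper's data): fit a sigmoidal Emax model to concentration-effect pairs, then solve it for the concentration producing 80% of the maximal effect.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill/Emax concentration-response model
def emax_model(c, emax, ec50, n):
    return emax * c ** n / (ec50 ** n + c ** n)

# invented concentration (nM) / % decongestion pairs for illustration
conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])
effect = np.array([5.0, 14.0, 33.0, 58.0, 76.0, 84.0])

(emax, ec50, n), _ = curve_fit(emax_model, conc, effect, p0=[90.0, 200.0, 1.0])

# at 80% of Emax:  c^n / (ec50^n + c^n) = 0.8  =>  EC80 = EC50 * 4^(1/n)
ec80 = ec50 * 4.0 ** (1.0 / n)
print(f"EC50 ~ {ec50:.0f} nM, Hill n ~ {n:.2f}, EC80 ~ {ec80:.0f} nM")
```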