891 results for post-Newtonian approximation to general relativity
Abstract:
In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that symbolically represents the peaks and valleys, as well as the spatial structure, of a waveform in the form of a tree. An initial approximation to the RT, called a Skeletal Tree (ST), is first generated algorithmically. The exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSEs of 0.56 and 0.71 for peaks and valleys, respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness, respectively.
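To make the data structure concrete, here is a minimal Python sketch, not from the paper, of extracting the turning points of a pitch contour and collecting them under a skeletal-tree root. The names (Node, skeletal_tree) and the flat one-level tree are illustrative assumptions; the paper's RT encodes richer spatial structure, and the numerical values would subsequently be refined by fuzzy rules.

```python
# Hypothetical sketch: peaks and valleys of an F0 contour as a simple tree.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # "root", "peak", or "valley"
    index: int                 # position in the contour
    value: float               # F0 value; refined by fuzzy logic in the paper
    children: list = field(default_factory=list)

def turning_points(contour):
    """Return (kind, index, value) for each local peak/valley."""
    points = []
    for i in range(1, len(contour) - 1):
        if contour[i - 1] < contour[i] > contour[i + 1]:
            points.append(("peak", i, contour[i]))
        elif contour[i - 1] > contour[i] < contour[i + 1]:
            points.append(("valley", i, contour[i]))
    return points

def skeletal_tree(contour):
    """Attach each turning point under a root, preserving temporal order."""
    root = Node("root", 0, contour[0] if contour else 0.0)
    for kind, i, v in turning_points(contour):
        root.children.append(Node(kind, i, v))
    return root

tree = skeletal_tree([110, 140, 120, 95, 130, 100])
print([(n.kind, n.index, n.value) for n in tree.children])
```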
Abstract:
Drawing on data from a recent international research project, this article focuses on the challenges faced by teachers of English to young learners against the backdrop of the global rise of English. A mixed-methods approach was used to obtain the data, including a survey completed by 4,459 teachers worldwide, and case studies, including observations and interviews with teachers, in five different primary schools in five different countries. A number of challenges emerged as affecting large numbers of teachers in different educational contexts, namely teaching speaking, motivation, differentiating learning, teaching large classes, discipline, teaching writing, and teaching grammar. Importantly, some of these challenges have not been highlighted in the literature on young learner teaching to date. Other challenges are more localised, such as developing teachers' English competence. The article argues that teacher education should focus less on introducing teachers to general approaches to English language teaching and more on supporting teachers to meet the challenges that they have identified.
Abstract:
* Supported by the Army Research Office under grant DAAD-19-02-10059.
Abstract:
The finding that Pareto distributions adequately model Internet packet interarrival times has motivated the proposal of methods to evaluate steady-state performance measures of Pareto/D/1/k queues. Some limited analytical derivations for these queue models have been proposed in the literature, but their solutions often pose a great mathematical challenge. To overcome such limitations, simulation tools that can deal with general queueing systems must be developed. Despite certain limitations, simulation algorithms provide a mechanism for gaining insight and good numerical approximations to queue parameters. In this work, we give an overview of some of these methods and compare them with our simulation approach, which is suited to solving queues with Generalized-Pareto interarrival-time distributions. The paper discusses the properties and use of the Pareto distribution. We propose a real-time trace-simulation model for estimating the steady-state probability (showing the tail-raising effect), loss probability, and delay of the Pareto/D/1/k queue, and make a comparison with M/D/1/k. The background on Internet traffic helps carry out the evaluation correctly. This model can be used to study long-tailed queueing systems. We close the paper with some general comments and offer thoughts about future work.
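A hedged sketch of such a simulation (toy parameter values assumed; not the authors' code): generalized-Pareto interarrivals generated by inverse-transform sampling feed a single-server queue with deterministic service time D and system capacity k, from which loss probability and mean waiting time are estimated.

```python
# Illustrative Pareto/D/1/k simulation; all parameters are assumptions.
import math
import random
from collections import deque

def gp_interarrival(xi, sigma, mu=0.0):
    """Generalized-Pareto sample via the inverse CDF."""
    u = random.random()
    if xi == 0.0:
        return mu - sigma * math.log(u)
    return mu + sigma * (u ** (-xi) - 1.0) / xi

def simulate(n_arrivals, D, k, xi, sigma):
    t, lost, delays = 0.0, 0, []
    in_system = deque()                 # departure times of customers in system
    for _ in range(n_arrivals):
        t += gp_interarrival(xi, sigma)
        while in_system and in_system[0] <= t:
            in_system.popleft()         # completed customers depart
        if len(in_system) >= k:
            lost += 1                   # buffer full: arrival is dropped
            continue
        start = max(in_system[-1], t) if in_system else t
        in_system.append(start + D)
        delays.append(start - t)        # waiting time before service begins
    return lost / n_arrivals, sum(delays) / len(delays)

loss, wait = simulate(200_000, D=1.0, k=10, xi=0.3, sigma=0.9)
print(f"loss probability ~ {loss:.4f}, mean wait ~ {wait:.3f}")
```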
Abstract:
Recent changes to the legislation on chemicals and cosmetics testing call for a change in the paradigm regarding the current 'whole animal' approach for identifying chemical hazards, including the assessment of potential neurotoxins. Accordingly, since 2004, we have worked on the development of the integrated co-culture of post-mitotic, human-derived neurons and astrocytes (NT2.N/A), for use as an in vitro functional central nervous system (CNS) model. We have used it successfully to investigate indicators of neurotoxicity. For this purpose, we used NT2.N/A cells to examine the effects of acute exposure to a range of test chemicals on the cellular release of brain-derived neurotrophic factor (BDNF). It was demonstrated that the release of this protective neurotrophin into the culture medium (above control levels) occurred consistently in response to sub-cytotoxic levels of known neurotoxic, but not non-neurotoxic, chemicals. These increases in BDNF release were quantifiable, statistically significant, and occurred at concentrations below those at which cell death was measurable, which potentially indicates specific neurotoxicity, as opposed to general cytotoxicity. The fact that the BDNF immunoassay is non-invasive, and that NT2.N/A cells retain their functionality for a period of months, may make this system useful for repeated-dose toxicity testing, which is of particular relevance to cosmetics testing without the use of laboratory animals. In addition, the production of NT2.N/A cells without the use of animal products, such as fetal bovine serum, is being explored, with the aim of producing a fully humanised cellular model.
Abstract:
Background Monocytes are implicated in the initiation and progression of the atherosclerotic plaque, contributing to its instability and rupture. Although peripheral monocytosis has been related to poor clinical outcome post ST-elevation myocardial infarction (STEMI), only scarce information is available on the mechanisms of this association. Tumour necrosis factor alpha (TNFα) is a key cytokine in the acute-phase inflammatory response, and it is predominantly produced by inflammatory macrophages. Little is known about the association of TNFα with circulating monocyte subpopulations post STEMI. Method A total of 142 STEMI patients (mean age 62±13 years; 72% male) treated with percutaneous revascularization were recruited, with blood samples obtained within the first 24 hours from onset and on days 10-14. Peripheral blood monocyte subpopulations were enumerated and characterized using flow cytometry after staining for CD14, CD16 and CCR2, and were defined as: CD14++CD16-CCR2+ (Mon1), CD14++CD16+CCR2+ (Mon2) and CD14+CD16++CCR2- (Mon3) cells. Plasma levels of TNFα were measured by enzyme-linked immunosorbent assay (ELISA, Peprotec system, UK). Major adverse cardiac events (MACE), defined as recurrent STEMI, new diagnosis of heart failure, and death, were recorded at follow-up (mean 164±134 days). Results TNFα levels were significantly higher 24 hours post STEMI compared to day 14 (paired t-test, p<0.001), with day 1 levels weakly correlated with total monocyte count as well as Mon1 (Spearman’s correlation, r=0.19, p=0.02 and r=0.22, p=0.01, respectively). There was no correlation between TNFα and the Mon2 or Mon3 subpopulations. TNFα levels were significantly higher in patients with a recorded MACE (n=28, Mann-Whitney test, p<0.001) (figure 1).
Abstract:
It is generally assumed by educators that inservice training will make a significant difference in teacher knowledge of topics related to education. This investigation addressed that assumption by examining the effects of various factors, e.g., amount and timing of inservice training, upon teacher knowledge of educational law. Of special interest was teacher knowledge of the law as it pertained to ethnic and other characteristics of students in urban school settings. This study was deliberately designed to determine which factors should be later investigated in a more deterministic form, e.g., an experimental design. The investigation built upon that of Ogletree (1985), Osborne (1996) and others who focused on the importance of teacher development as a method to enhance professional abilities. The main question addressed in this study was, "How knowledgeable are teachers of school law, especially with regard to general school law, the Meta Consent Decree and Section 504 of the Rehabilitation Act of 1973?" The study participants (N = 302) were from the Dade County School System, the fourth largest in the U.S. The survey design (approved by the System) specified participants from all levels and types of schools and geographic regions. A survey instrument was created, pilot tested, revised and approved for use by the district's official representatives. After administration of the instrument, the resultant data were analyzed with several appropriate tests, e.g., multivariate analysis of variance (MANOVA). Several findings emerged from the analysis of the data: in general, teachers did not have sufficient knowledge of school law; factors such as amount and level of education, and status and position, were positively correlated with increased knowledge; factors such as years of experience, gender, race and ethnicity were not correlated with higher levels of knowledge. The most significant finding, however, was that when teachers had participated in several inservice training experiences (typically workshops), combined with the other factors noted above, their knowledge of school law was significantly higher. Specific recommendations for future studies were made.
Abstract:
The purpose of this study was to critically evaluate Tom Stoppard's application of chaos theory and quantum science in ROSENCRANTZ AND GUILDENSTERN ARE DEAD, HAPGOOD and ARCADIA, and to determine the extent to which Stoppard argues for the importance of human action and choice. Through critical analysis this study examined how Stoppard applies the quantum aspects of: (1) indeterminacy to human epistemology in ROSENCRANTZ AND GUILDENSTERN ARE DEAD; (2) complementarity to human identity in HAPGOOD; and (3) recursive symmetry to human history in ARCADIA. It also examined how Stoppard excavates the complexities of human action, choice and identity through the lens of chaos theory and quantum science. These findings demonstrated that Tom Stoppard is not merely juxtaposing quantum science and human interactions for the sake of drama; rather, by excavating the complexities of human action, choice and identity through the lens of chaos theory and quantum science, Stoppard demonstrates the fundamental connection between individuals and the post-Newtonian universe.
Abstract:
Novel predator introductions are thought to have a high impact on native prey, especially in freshwater systems. Prey may fail to recognize predators as a threat, or show inappropriate or ineffective responses. The ability of prey to recognize and respond appropriately to novel predators may depend on the prey’s use of general or specific cues to detect predation threats. We used laboratory experiments to examine the ability of three native Everglades prey species (Eastern mosquitofish, flagfish and riverine grass shrimp) to respond to the presence, as well as to the chemical and visual cues, of a native predator (warmouth) and a recently introduced non-native predator (African jewelfish). We used prey from populations that had not previously encountered jewelfish. Despite this novelty, the native warmouth and non-native jewelfish had overall similar predatory effects, except on mosquitofish, which suffered higher warmouth predation. When predators were present, the three prey taxa showed consistent and strong responses to the non-native jewelfish, similar in magnitude to the responses exhibited to the native warmouth. When cues were presented, fish prey responded largely to chemical cues, while shrimp showed no response to either chemical or visual cues. Overall, responses by mosquitofish and flagfish to chemical cues indicated low differentiation among cue types, with similar responses to general and specific cues. The fact that antipredator behaviours were similar toward native and non-native predators suggests that the susceptibility to a novel fish predator may be similar to that of native fishes, and prey may overcome predator novelty, at least when predators are confamilial to other common and longer-established non-native threats.
Abstract:
One in five adults 65 years and older has diabetes. Coping with diabetes is a lifelong task, and much of the responsibility for managing the disease falls upon the individual. Reports of non-adherence to recommended treatments are high. Understanding the additive impact of diabetes on quality-of-life issues is important. The purpose of this study was to investigate quality of life and diabetes self-management behaviors in ethnically diverse older adults with type 2 diabetes. The SF-12v2 was used to measure physical and mental health quality of life. Scores were compared to general, age sub-group, and diabetes-specific norms. The Transtheoretical Model (TTM) was applied to assess perceived versus actual behavior for three diabetes self-management tasks: dietary management, medication management, and blood glucose self-monitoring. Dietary intake and hemoglobin A1c values were measured as outcome variables. Utilizing a cross-sectional research design, participants were recruited from Elderly Nutrition Program congregate meal sites (n = 148, mean age 75). Results showed that mean scores on the SF-12v2 were significantly lower in the study sample than the general norms for physical health (p < .001) and mental health (p < .01), the age sub-group norms (p < .05), and the diabetes-specific norms for physical health (p < .001). A multiple regression analysis found that adherence to an exercise plan was significantly associated with better physical health (p < .001). Transtheoretical Model multiple regression analyses explained 68% of the variance for % kcal from fat, 41% for fiber, 70% for % kcal from carbohydrate, and 7% for hemoglobin A1c values. Significant associations were found between TTM stage of change and dietary fiber intake (p < .01). Other significant associations related to diet included gender (p < .01), ethnicity (p < .05), employment (p < .05), type of insurance (p < .05), adherence to an exercise plan (p < .05), number of doctor visits/year (p < .01), and physical health (p < .05). Significant associations were found between hemoglobin A1c values and age (p < .05), being non-Hispanic Black (p < .01), income (p < .01), and eye problems (p < .05). The study highlights the importance of the beneficial effects of exercise on quality-of-life issues. Furthermore, application of the Transtheoretical Model in conjunction with an assessment of dietary intake may be valuable in helping individuals make lifestyle changes.
Abstract:
We consider a class of initial data sets (Σ, h, K) for the Einstein constraint equations which we define to be generalized Brill (GB) data. This class of data is simply connected, U(1)²-invariant, maximal, and four-dimensional with two asymptotic ends. We study the properties of GB data and in particular the topology of Σ. GB initial data sets have applications to geometric inequalities in general relativity. We construct a mass functional M for GB initial data sets and show: (i) the mass of any GB data set is greater than or equal to M; (ii) M is a non-negative functional for a broad subclass of GB data; (iii) M evaluates to the ADM mass for reduced t-φ^i symmetric data sets; (iv) its critical points are stationary U(1)²-invariant vacuum solutions to the Einstein equations. We then use this mass functional to prove two geometric inequalities: (1) a positive mass theorem for a subclass of GB initial data which includes Myers-Perry black holes; (2) a class of local mass-angular momenta inequalities for U(1)²-invariant black holes. Finally, we construct a one-parameter family of initial data sets which can be seen as small deformations of the extreme Myers-Perry black hole that preserve the horizon geometry and angular momenta but have strictly greater energy.
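For orientation (standard background, not a formula from the thesis): in the familiar three-dimensional asymptotically flat setting, the ADM mass referenced in (iii) is

\[
m_{\mathrm{ADM}} \;=\; \frac{1}{16\pi} \lim_{r \to \infty} \oint_{S_r} \left( \partial_j h_{ij} - \partial_i h_{jj} \right) \nu^i \, dS ,
\]

where h is the induced metric in asymptotically Cartesian coordinates and ν is the outward unit normal to the coordinate sphere S_r; the higher-dimensional analogue relevant to Myers-Perry data carries a dimension-dependent normalizing constant.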
Abstract:
Large marine areas and regional seas present a challenge in terms of management. They are often bordered by numerous maritime jurisdictions, host multi-use and multi-sector environments, involve varying governance arrangements, and require the generation of sufficient data to best inform decision-makers. Marine management at the regional scale involves a range of mechanisms and approaches to ensure all relevant stakeholders have an opportunity to engage in the process, and these approaches can differ in their legal and regulatory conditions. At present, no such comparable structures exist at the transnational level for the ecosystem-based management of the Celtic Sea. Against this backdrop, a participative process, involving representatives from differing sectors of activity in the Celtic Sea spanning four Member States, was established for the purpose of identifying realistic and meaningful management principles in line with the goals of the Marine Strategy Framework Directive.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
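As a toy illustration of the latent-class (PARAFAC-type) starting point, under assumed dimensions (none of this is from the thesis): the joint pmf of p categorical variables is a weighted mixture of product-multinomials, i.e. a nonnegative low-rank factorization of the probability tensor.

```python
# Illustrative latent class model for multivariate categorical data.
import numpy as np

rng = np.random.default_rng(0)
k, p, d = 3, 4, 5                            # latent classes, variables, categories
weights = rng.dirichlet(np.ones(k))          # mixture weights nu_h
psi = rng.dirichlet(np.ones(d), size=(k, p)) # psi[h, j] = P(x_j = . | class h)

def joint_pmf(x):
    """P(x_1, ..., x_p) = sum_h nu_h * prod_j psi[h, j, x_j]."""
    probs = np.array([w * np.prod([psi[h, j, x[j]] for j in range(p)])
                      for h, w in enumerate(weights)])
    return probs.sum()

print(joint_pmf((0, 2, 1, 4)))
```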
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population-structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
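Concretely, in one standard formulation (the thesis fixes its own convention for the direction of the divergence), the optimal Gaussian approximation is

\[
\hat q \;=\; \operatorname*{arg\,min}_{q \in \mathcal G} \; \mathrm{KL}\left( \pi \,\|\, q \right), \qquad \mathcal G = \left\{ \mathcal N(\mu, \Sigma) : \mu \in \mathbb R^d, \; \Sigma \succ 0 \right\},
\]

where π is the exact posterior; in this direction the minimizer matches the posterior mean and covariance, and the chapter's bounds then control the divergence between π and the Gaussian as a function of sample size.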
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
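For reference (standard background, not the thesis's construction): de Haan's spectral representation underlying Smith's model writes a max-stable process as

\[
Z(s) \;=\; \max_{i \ge 1} \; \zeta_i \, f(s - U_i), \qquad \{(\zeta_i, U_i)\} \sim \mathrm{PPP}\left( \zeta^{-2} \, d\zeta \times du \right),
\]

with f a probability density (Gaussian in Smith's model). The velocity processes described above can then be thought of as letting each support point move in time, e.g. U_i(t) = U_i + v_i t over a finite lifetime, so that waiting times between threshold exceedances carry the temporal dependence information.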
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
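A minimal sketch of the kind of approximating kernel in question (toy model and subsample size are assumptions, not the thesis's algorithms): a Metropolis-Hastings chain whose log-likelihood is estimated from a random subset of the data, trading per-step cost against error in the transition kernel.

```python
# Illustrative approximate MH via data subsampling (biased, by design).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=100_000)   # toy data: N(theta, 1)

def subset_loglik(theta, m):
    idx = rng.integers(0, len(data), size=m)
    # rescale the subsample log-likelihood to the full data size
    return len(data) / m * np.sum(-0.5 * (data[idx] - theta) ** 2)

def approx_mh(theta0, n_iter, m, step=0.05):
    chain, theta = [], theta0
    ll = subset_loglik(theta, m)
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = subset_loglik(prop, m)
        if np.log(rng.random()) < ll_prop - ll:  # flat prior for simplicity
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

print(approx_mh(0.0, 2000, m=1000).mean())
```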
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated-normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. By contrast, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
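For concreteness, a minimal truncated-normal (Albert-Chib-style) data augmentation sampler for probit regression, the kind of algorithm whose mixing the chapter studies; the prior, data-generating values, and rare-event intercept below are illustrative assumptions.

```python
# Sketch of truncated-normal data augmentation for Bayesian probit regression.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, p = 2000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])            # large negative intercept: rare successes
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def probit_gibbs(X, y, n_iter=500):
    n, p = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(p))   # posterior covariance, N(0, I) prior
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1) truncated to (0, inf) if y_i=1, else (-inf, 0)
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        m = V @ (X.T @ z)
        beta = rng.multivariate_normal(m, V)  # beta | z ~ N(m, V)
        draws[t] = beta
    return draws

print(probit_gibbs(X, y).mean(axis=0))
```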
Abstract:
The objective of this study was to assess seasonal variation in nutritional status and feeding practices among lactating mothers and their children 6-23 months of age in two different agro-ecological zones of rural Ethiopia (a lowland zone and a midland zone). Food availability and access are strongly affected by seasonality in Ethiopia. However, there are few published data on the effects of seasonal food fluctuations on the nutritional status and dietary diversity patterns of mothers and children in rural Ethiopia. A longitudinal study was conducted among 216 mothers in two agro-ecological zones of rural Ethiopia during the pre- and post-harvest seasons. Data were collected on many parameters, including anthropometry; blood levels of haemoglobin, ferritin, and zinc; urinary iodine levels; questionnaire data regarding demographic and household parameters, health issues, and infant and young child feeding practices; 24-hour food recall to determine dietary diversity scores; and household use of iodized salt. Chi-square and multivariable regression models were used to identify independent predictors of nutritional status. A wide variety of results were generated, including the following highlights. It was found that 95.4% of children were breastfed, of whom 59.7% were first breastfed within one hour of birth; 22.2% received pre-lacteal feeds, and 50.9% of children received complementary feeding by 6 months of age. Iron deficiency was found in 44.4% of children and 19.8% of mothers. Low zinc status was found in 72.2% of children and 67.3% of mothers. Of the study subjects, 52.5% of the children and 19.1% of the mothers were anaemic, and 29.6% of children and 10.5% of mothers had iron-deficiency anaemia. Among mothers with low serum iron status, 81.2% and 56.2% of their children had low serum zinc and iron, respectively. Similarly, among mothers with low serum zinc status, 75.2% and 45.3% of their children had low serum zinc and iron, respectively. There was a strong correlation between the micronutrient status of the mothers and the children for ferritin, zinc and haemoglobin (p < 0.001). There was also a statistically significant difference between agro-ecological zones in micronutrient deficiencies among the mothers (p < 0.001), but not among their children. The majority (97.6%) of mothers in the lowland zone were deficient in at least one micronutrient biomarker (zinc, ferritin or haemoglobin). Deficiencies in one, two, or all three biomarkers of micronutrient status were observed in 48.1%, 16.7% and 9.9% of mothers and 35.8%, 29.0%, and 23.5% of children, respectively. Additionally, about 42.6% of mothers had low levels of urinary iodine and 35.2% of lactating mothers had goitre. Total goitre prevalence rates and urinary iodine levels of lactating mothers were not significantly different across agro-ecological zones. Adequately iodised salt was available in 36.6% of households. The prevalence of anaemia among lactating mothers increased from the post-harvest (21.8%) to the pre-harvest season (40.9%); increases were from 8.6% to 34.4% in the midland and from 34.2% to 46.3% in the lowland agro-ecological zone. Fifteen percent of mothers were anaemic during both seasons. Predictors of anaemia were high parity of the mother and low dietary diversity. The proportion of stunted and underweight children increased from 39.8% and 27% in the post-harvest season to 46.0% and 31.8% in the pre-harvest season, respectively. However, wasting in children decreased from 11.6% to 8.5%.
Major variations in stunting and underweight were noted in the midland compared to the lowland agro-ecological zone. Anthropometric measurements in mothers indicated high levels of undernutrition. The prevalence of undernutrition in mothers (BMI < 18.5 kg/m²) increased from 41.7% to 54.7% between the post- and pre-harvest seasons. The seasonal effect was generally greater in the midland community for all forms of malnutrition. Parity, number of children under five years, and regional variation were predictors of low BMI among lactating mothers. There were differences in minimum meal frequency, minimum acceptable diet, and dietary diversity in children between the pre-harvest and post-harvest seasons, and these parameters were poor in both seasons. Dietary diversity among mothers was higher in the lowland zone but was poor in both zones across the seasons. In conclusion, malnutrition and micronutrient deficiencies are very prevalent among lactating mothers and their children 6-23 months old in the study areas. There are significant seasonal variations in malnutrition and dietary diversity, in addition to significant differences between the lowland and midland agro-ecological zones. These findings suggest a need to design effective preventive public health nutrition programs that address both the mothers’ and children’s needs, particularly in the pre-harvest season.
Abstract:
I distinguish two ways that philosophers have approached and explained the reality and status of human social institutions. I call these approaches “naturalist” and “post-naturalist”. Common to both approaches is an understanding that the status of mind and its relation to the world or “nature” has implications for a conception of the status of institutional reality. Naturalists hold that mind is explicable within a scientific frame that conceives of mind as a fundamentally material process. By proxy, social reality is also materially explicable. Post-naturalists critique this view, holding instead that naturalism is parasitic on contemporary science; it is therefore non-compulsory and distorts how we ought to understand mind and social reality. A comparison of naturalism and post-naturalism comprises the content of the first chapter. The second chapter turns to tracing out the dimensions of a post-naturalist narrative of mind and social reality. Post-naturalists conceive of mind and its activity of thought as sui generis, and it follows from this that social institutions are better understood as a rational mind's mode of expression in the world. Post-naturalism conceives of social reality as a necessary dimension of thought. Thought requires a second person, and thereby a tradition or context of norms that both come to structure its expression and become the products of that expression. This is in contrast to the idea that social reality is a production of minds, and thereby derivative. Social reality, self-conscious thought, and thought of the second person are therefore three dimensions of a greater unity.